24 results for exponential sums
in Helda - Digital Repository of University of Helsinki
Abstract:
This research has been prompted by an interest in the atmospheric processes of hydrogen. The sources and sinks of hydrogen are important to know, particularly if hydrogen becomes more common as a replacement for fossil fuel in combustion. Hydrogen deposition velocities (vd) were estimated by applying chamber measurements, a radon tracer method and a two-dimensional model. These three approaches were compared with each other to discover the factors affecting the soil uptake rate. A static closed-chamber technique was introduced to determine the hydrogen deposition velocity values in an urban park in Helsinki and at a rural site at Loppi. A three-day chamber campaign to estimate soil uptake was held at a remote site at Pallas in 2007 and 2008. The atmospheric mixing ratio of molecular hydrogen has also been measured by a continuous method in Helsinki in 2007–2008 and at Pallas from 2006 onwards. The mean vd values measured in the chamber experiments in Helsinki and Loppi were between 0.0 and 0.7 mm s⁻¹. The ranges of the results with the radon tracer method and the two-dimensional model were 0.13–0.93 mm s⁻¹ and 0.12–0.61 mm s⁻¹, respectively, in Helsinki. The vd values in the three-day campaign at Pallas were 0.06–0.52 mm s⁻¹ (chamber) and 0.18–0.52 mm s⁻¹ (radon tracer method and two-dimensional model). At Kumpula, the radon tracer method and the chamber measurements produced higher vd values than the two-dimensional model. The results of all three methods were close to each other between November and April, except for the chamber results from January to March, while the soil was frozen. The hydrogen deposition velocity values of all three methods were compared with one-week cumulative rain sums. Precipitation increases the soil moisture, which decreases the soil uptake rate. The measurements made in snow seasons showed that a thick snow layer also hindered gas diffusion, lowering the vd values. The H2 vd values were compared with snow depth, and a decaying exponential fit was obtained. During a prolonged drought in summer 2006, soil moisture values were lower than in the other summer months between 2005 and 2008; these conditions prevailed when the high chamber vd values of summer 2006 were measured. The mixing ratio of molecular hydrogen shows a seasonal variation. The lowest atmospheric mixing ratios were found in late autumn, when high deposition velocity values were still being measured. The carbon monoxide (CO) mixing ratio was also measured. Hydrogen and carbon monoxide are highly correlated in an urban environment, owing to the emissions originating from traffic. After correction for the soil deposition of H2, the slope was 0.49±0.07 ppb (H2) / ppb (CO). Using the corrected hydrogen-to-carbon-monoxide ratio, the total hydrogen load emitted by Helsinki traffic in 2007 was 261 t (H2) a⁻¹. Hydrogen, methane and carbon monoxide are connected with each other through the atmospheric methane oxidation process, in which formaldehyde is produced as an important intermediate. The photochemical degradation of formaldehyde produces hydrogen and carbon monoxide as end products. Examination of back-trajectories revealed long-range transport of carbon monoxide and methane. The trajectories can be grouped by applying cluster and source analysis methods; thus natural and anthropogenic emission sources can be separated by analyzing trajectory clusters.
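As an illustration of the kind of fit described above (a minimal sketch, not the thesis's code or data; the values and parameter names below are hypothetical), a decaying exponential of deposition velocity against snow depth could be fitted as follows:

```python
# Minimal sketch: fit vd(depth) = vd0 * exp(-k * depth) to deposition velocities.
# The data points are hypothetical and only illustrate the shape of such a fit.
import numpy as np
from scipy.optimize import curve_fit

def decaying_exp(depth_cm, vd0, k):
    # vd0: deposition velocity at zero snow depth (mm/s); k: decay constant (1/cm)
    return vd0 * np.exp(-k * depth_cm)

snow_depth_cm = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 80.0])    # hypothetical
vd_mm_per_s   = np.array([0.55, 0.40, 0.29, 0.16, 0.09, 0.05])   # hypothetical

(vd0, k), cov = curve_fit(decaying_exp, snow_depth_cm, vd_mm_per_s, p0=(0.5, 0.02))
print(f"fitted vd0 = {vd0:.2f} mm/s, decay constant k = {k:.3f} 1/cm")
```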
Abstract:
The Baltic Sea was studied with respect to selected organic contaminants and their ecotoxicology. The research consisted of analyses of total hydrocarbons, polycyclic aromatic hydrocarbons, bile metabolites, hepatic ethoxyresorufin-O-deethylase (EROD) activity, polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). The contaminants were measured in various matrices, such as seawater, sediment and biota. The methods of analysis were evaluated and refined to ensure comparability of the results. Polycyclic aromatic hydrocarbons originating from petroleum are known to be among the substances most harmful to the marine environment. In Baltic subsurface water, a seasonal dependence of the total hydrocarbon concentrations (THCs) was observed. Although concentrations of parent polycyclic aromatic hydrocarbons (PAHs) in surface sediment varied between 64 and 5161 µg kg⁻¹ (dw), concentrations above 860 µg kg⁻¹ (dw) were found in all the studied sub-basins of the Baltic Sea. Concentrations commonly considered to substantially increase the risk of liver disease and reproductive impairment in fish, as well as potential effects on growth (above 1000 µg kg⁻¹ dw), were found in all the studied sub-basins of the Baltic Sea except the Kattegat. Thus, considerable pollution in sediments was indicated. In bivalves, the sums of 12 PAHs varied on a wet weight basis between 44 and 298 µg kg⁻¹ (ww). The predominant PAHs were of high molecular weight, and the PAH profiles of M. balthica differed from those found in sediment from the same area. The PAHs were both pyrolytic and petrogenic in origin, and a contribution from diesel engines was found, which indicates pollution of the Baltic Sea, most likely caused by the steadily increasing shipping in the area. The HPLC methods developed for hepatic EROD activity and bile metabolite measurements proved to be fast and suitable for the study of biological effects. A mixed-function oxygenase enzyme system in Baltic Sea perch collected from the Gulf of Finland was induced slightly: EROD activity in perch varied from 0.30 to 14 pmol min⁻¹ mg⁻¹ protein. This range can be considered comparable to background values. Recent PAH exposure was also indicated by enhanced levels (213 and 1149 µg kg⁻¹) of the bile metabolite 1-hydroxypyrene. No correlation was indicated between hepatic EROD activity and the concentration of 1-hydroxypyrene in bile. PCBs and OCPs were observed in Baltic Sea sediment, bivalves and herring. Sums of seven CBs in surface sediment (0–5 cm) ranged from 0.04 to 6.2 µg kg⁻¹ (dw) and sums of three DDTs from 0.13 to 5.0 µg kg⁻¹ (dw). The highest levels of contaminants were found in the easternmost area of the Gulf of Finland, where the total carbon and nitrogen content was highest and the percentage proportion of p,p'-DDT was lowest. The highest concentrations of CBs and the lowest concentrations of DDTs were found in M. balthica from the Gulf of Finland. The highest levels of DDTs were found in M. balthica from the Hanö Bight, which is the outer part of the Bornholm Basin close to the Swedish mainland. In bivalves, the sums of seven CBs were 72–108 µg kg⁻¹ (lw) and the sums of three DDTs 66–139 µg kg⁻¹ (lw). Results from temporal trend monitoring showed that during the period 1985–2002 the concentrations of seven CBs in two-year-old female Baltic herring clearly decreased, from 9–16 to 2–6 µg kg⁻¹ (ww), in the northern Baltic Sea. At the same time, concentrations of three DDTs declined from 8–15 to 1–5 µg kg⁻¹ (ww).
The total concentration of the fat-soluble CBs and DDTs in Baltic herring muscle was shown to be age-dependent; the average concentrations in ten-year-old Baltic herring were three- to five-fold higher than in two-year-old herring. In Baltic herring and bivalves, as well as in surface sediments, CB138 and CB153 were predominant among the CBs, whereas among the DDTs p,p'-DDD predominated in sediment and p,p'-DDE in bivalves and Baltic herring muscle. Baltic Sea sediments are potential sources of contaminants that may become available for bioaccumulation. Based on ecotoxicological assessment criteria, cause for concern regarding CBs in sediments was indicated for the Gulf of Finland and the northern Baltic Proper, and for the northern Baltic Sea regarding CBs in Baltic herring more than two years old. Statistical classification of selected organic contaminants indicated high-level contamination for p,p'-DDT, p,p'-DDD, p,p'-DDE, total DDTs, HCB, CB118 and CB153 in the muscle of Baltic herring in age groups two to ten years; in contrast, concentrations of α-HCH and γ-HCH were found to be moderate. The concentrations of DDTs and CBs in bivalves are sufficient to cause biological effects, demonstrating that long-term biological effects are still possible in the case of DDTs in the Hanö Bight.
Abstract:
Pressurised hot water extraction (PHWE) exploits the unique temperature-dependent solvent properties of water, minimising the use of harmful organic solvents. Water is an environmentally friendly, cheap and easily available extraction medium. The effects of temperature, pressure and extraction time in PHWE have often been studied, but here the emphasis was on other parameters important for the extraction, most notably the dimensions of the extraction vessel and the stability and solubility of the analytes to be extracted. Non-linear data analysis and self-organising maps were employed in the data analysis to obtain correlations between the parameters studied, the recoveries and the relative errors. First, pressurised hot water extraction was combined on-line with liquid chromatography-gas chromatography (LC-GC), and the system was applied to the extraction and analysis of polycyclic aromatic hydrocarbons (PAHs) in sediment. The method is of superior sensitivity compared with the traditional methods, and only a small 10 mg sample was required for analysis. The commercial extraction vessels were replaced by laboratory-made stainless steel vessels because of some problems that arose. The performance of the laboratory-made vessels was comparable to that of the commercial ones. In an investigation of the effect of thermal desorption in PHWE, it was found that at lower temperatures (200 °C and 250 °C) the effect of thermal desorption is smaller than the effect of the solvating property of hot water. At 300 °C, however, thermal desorption is the main mechanism. The effect of the geometry of the extraction vessel on recoveries was studied with five specially constructed extraction vessels. In addition to the extraction vessel geometry, the sediment packing style and the direction of water flow through the vessel were investigated. The geometry of the vessel was found to have only a minor effect on the recoveries, and the same was true of the sediment packing style and the direction of water flow. These are good results, because these parameters do not have to be carefully optimised before the start of extractions. Liquid-liquid extraction (LLE) and solid-phase extraction (SPE) were compared as trapping techniques for PHWE. LLE was more robust than SPE and provided better recoveries and repeatabilities. Problems related to blocking of the Tenax trap and unrepeatable trapping of the analytes were encountered in SPE. Thus, although LLE is more labour-intensive, it can be recommended over SPE. The stabilities of the PAHs in aqueous solutions were measured using a batch-type reaction vessel. Degradation was observed at 300 °C even with the shortest heating time. Ketones, quinones and other oxidation products were observed. Although the conditions of the stability studies differed considerably from the extraction conditions in PHWE, the results indicate that the risk of analyte degradation must be taken into account in PHWE. The aqueous solubilities of acenaphthene, anthracene and pyrene were measured, first below and then above the melting points of the analytes. Measurements below the melting point were made to check that the equipment was working, and the results were compared with those obtained earlier. Good agreement was found between the measured and literature values. A new saturation cell was constructed for the solubility measurements above the melting point, because the flow-through saturation cell could not be used above the melting point.
An exponential relationship with temperature was found for the measured solubilities of pyrene and anthracene.
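The abstract does not give the fitted equation. Two simple forms in which such an exponential solubility-temperature relationship is commonly cast (general background, not quoted from the thesis) are

\[ S(T) \;=\; S_0\, e^{\,bT} \quad (\ln S \ \text{linear in } T), \qquad \text{or the van't Hoff-type form} \quad \ln S \;=\; a - \frac{\Delta H_{\mathrm{sol}}}{R\,T}, \]

where S is the aqueous solubility, T the absolute temperature, R the gas constant, and S_0, b, a and \(\Delta H_{\mathrm{sol}}\) are fitted constants.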
Abstract:
The object of this dissertation is to study globally defined bounded p-harmonic functions on Cartan-Hadamard manifolds and Gromov hyperbolic metric measure spaces. Such functions are constructed by solving the so-called Dirichlet problem at infinity. This problem is to find a p-harmonic function on the space that extends continuously to the boundary at infinity and attains given boundary values there. The dissertation consists of an overview and three published research articles. In the first article the Dirichlet problem at infinity is considered for more general A-harmonic functions on Cartan-Hadamard manifolds. In the special case of two dimensions the Dirichlet problem at infinity is solved by assuming only that the sectional curvature has a certain upper bound. A sharpness result is proved for this upper bound. In the second article the Dirichlet problem at infinity is solved for p-harmonic functions on Cartan-Hadamard manifolds under the assumption that the sectional curvature is bounded outside a compact set, from above and from below, by functions that depend on the distance to a fixed point. The curvature bounds allow examples of quadratic decay and examples of exponential growth. In the final article a generalization of the Dirichlet problem at infinity for p-harmonic functions is considered on Gromov hyperbolic metric measure spaces. Existence and uniqueness results are proved, and Cartan-Hadamard manifolds are considered as an application.
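For readers unfamiliar with the terminology, the standard definitions run as follows (general background, not quoted from the articles): a function u is p-harmonic, for 1 < p < ∞, if it is a weak solution of the p-Laplace equation

\[ \Delta_p u \;=\; \operatorname{div}\!\big(|\nabla u|^{p-2}\,\nabla u\big) \;=\; 0, \]

and the Dirichlet problem at infinity on a Cartan-Hadamard manifold M asks, for a given continuous function \(\theta\) on the sphere at infinity \(\partial_\infty M\), for a p-harmonic function u on M that extends continuously to \(M \cup \partial_\infty M\) with \(u = \theta\) on \(\partial_\infty M\). A-harmonic functions are solutions of more general equations \(\operatorname{div}\mathcal{A}(x,\nabla u) = 0\), where \(\mathcal{A}\) satisfies structural growth conditions of p-Laplacian type.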
Abstract:
This thesis consists of three articles on Orlicz-Sobolev capacities. Capacity is a set function which gives information about the size of sets. Capacity is a useful concept in the study of partial differential equations, of generalizations of exponential-type inequalities and Lebesgue point theory, and of other topics related to weakly differentiable functions, such as functions belonging to some Sobolev space or Orlicz-Sobolev space. In this thesis it is assumed that the defining function of the Orlicz-Sobolev space, the Young function, satisfies certain growth conditions. In the first article, the null sets of two different versions of Orlicz-Sobolev capacity are studied. Sufficient conditions are given so that these two versions of capacity have the same null sets. The importance of having information about null sets lies in the fact that sets of capacity zero play a similar role in the Orlicz-Sobolev space setting as sets of measure zero do in the Lebesgue space and Orlicz space setting. The second article continues the work of the first. In this article, it is shown that if a Young function satisfies certain conditions, then the two versions of Orlicz-Sobolev capacity have the same null sets for its complementary Young function. In the third article the metric properties of Orlicz-Sobolev capacities are studied. It is usually difficult or impossible to calculate the capacity of a set. In applications it is often useful to have estimates for the Orlicz-Sobolev capacities of balls. Such estimates are obtained in this article, when the Young function satisfies certain growth conditions.
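As background (the thesis may use a different but related variant), a common variational definition of the Sobolev p-capacity of a set \(E \subset \mathbb{R}^n\) is

\[ C_p(E) \;=\; \inf\Big\{ \int_{\mathbb{R}^n} \big(|u|^{p} + |\nabla u|^{p}\big)\,dx \;:\; u \in W^{1,p}(\mathbb{R}^n),\ u \ge 1 \ \text{on an open neighbourhood of } E \Big\}, \]

and an Orlicz-Sobolev capacity is obtained by replacing the power function \(t \mapsto t^{p}\) with a Young function \(\Phi\), i.e. by minimising \(\int \big(\Phi(|u|) + \Phi(|\nabla u|)\big)\,dx\) over the same class of admissible functions.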
Abstract:
This PhD Thesis is about certain infinite-dimensional Grassmannian manifolds that arise naturally in geometry, representation theory and mathematical physics. From the physics point of view one encounters these infinite-dimensional manifolds when trying to understand the second quantization of fermions. The many-particle Hilbert space of the second quantized fermions is called the fermionic Fock space. A typical element of the fermionic Fock space can be thought of as a linear combination of configurations of “m particles and n anti-particles”. Geometrically the fermionic Fock space can be constructed as the space of holomorphic sections of a certain (dual) determinant line bundle lying over the so-called restricted Grassmannian manifold, which is a typical example of an infinite-dimensional Grassmannian manifold one encounters in QFT. The construction should be compared with its well-known finite-dimensional analogue, where one realizes an exterior power of a finite-dimensional vector space as the space of holomorphic sections of a determinant line bundle lying over a finite-dimensional Grassmannian manifold. The connection with infinite-dimensional representation theory stems from the fact that the restricted Grassmannian manifold is an infinite-dimensional homogeneous (Kähler) manifold, i.e. it is of the form G/H, where G is a certain infinite-dimensional Lie group and H its subgroup. A central extension of G acts on the total space of the dual determinant line bundle and also on the space of its holomorphic sections; thus G admits a (projective) representation on the fermionic Fock space. This construction also induces the so-called basic representation for loop groups (of compact groups), which in turn are vitally important in string theory / conformal field theory. The Thesis consists of three chapters: the first chapter is an introduction to the background material and the other two chapters are individually written research articles. The first article deals in a new way with a well-known question in Yang-Mills theory: when can one lift the action of the gauge transformation group on the space of connection one-forms to the total space of the Fock bundle in a way compatible with the second quantized Dirac operator? In general there is an obstruction to this (called the Mickelsson-Faddeev anomaly), and various geometric interpretations of this anomaly, using such things as group extensions and bundle gerbes, have been given earlier. In this work we give a new geometric interpretation of the Faddeev-Mickelsson anomaly in terms of differentiable gerbes (certain sheaves of categories) and central extensions of Lie groupoids. The second research article deals with the question of how to define a Dirac-like operator on the restricted Grassmannian manifold, which is an infinite-dimensional space and hence outside the scope of standard Dirac operator theory. The construction relies heavily on infinite-dimensional representation theory, and one of the most technically demanding challenges is to introduce proper normal orderings for certain infinite sums of operators in such a way that all divergences disappear and the infinite sum makes sense as a well-defined operator acting on a suitable Hilbert space of spinors. This research article was motivated by a more extensive ongoing project to construct twisted K-theory classes in Yang-Mills theory via a Dirac-like operator on the restricted Grassmannian manifold.
Abstract:
Many problems in analysis have been solved using the theory of Hodge structures. P. Deligne began to treat these structures in a categorical way. Following him, we introduce the categories of mixed real and complex Hodge structures. The category of mixed Hodge structures over the field of real or complex numbers is a rigid abelian tensor category, and in fact a neutral Tannakian category. Therefore it is equivalent to the category of representations of an affine group scheme. Direct sums of pure Hodge structures of different weights over the real or complex numbers can be realized as representations of the torus group whose complex points form the Cartesian product of two punctured complex planes. Mixed Hodge structures turn out to consist of the data of a direct sum of pure Hodge structures of different weights together with a nilpotent automorphism. Therefore mixed Hodge structures correspond to the representations of a certain semidirect product of a nilpotent group with the torus group acting on it.
Abstract:
The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion of MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, applications of this modern version of MDL have been quite rare because of computational complexity problems: for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
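For reference, the standard definition the abstract alludes to can be written as follows (generic notation, not taken from the dissertation): for a parametric model class and a discrete data sample x^n of size n,

\[ P_{\mathrm{NML}}(x^n) \;=\; \frac{P\big(x^n \mid \hat\theta(x^n)\big)}{\sum_{y^n} P\big(y^n \mid \hat\theta(y^n)\big)}, \qquad \mathrm{SC}(x^n) \;=\; -\log P_{\mathrm{NML}}(x^n), \]

where \(\hat\theta(x^n)\) denotes the maximum likelihood parameters for x^n and SC is the stochastic complexity. The normalizing sum in the denominator runs over all possible data sets of size n, so a naive evaluation takes time exponential in n; for continuous data the sum becomes a multi-dimensional integral.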
Abstract:
Modern Christian theology has struggled with the schism between the Bible and theology, and between biblical studies and systematic theology. Brevard Springs Childs is one of the biblical scholars who attempt to dismantle this “iron curtain” separating the two disciplines. The present thesis aims at analyzing Childs’ concept of theological exegesis in the canonical context. In the present study I employ the method of systematic analysis. The thesis consists of seven chapters. The first chapter is the introduction. The second chapter attempts to identify the most important elements influencing Childs’ methodology of biblical theology by sketching his academic development during his career. The third chapter deals with the crucial question of why and how the concept of the canon is so important for Childs’ methodology of biblical theology. In chapter four I analyze why and how Childs is dissatisfied with historical-critical scholarship, and I point out the differences and similarities between his canonical approach and historical criticism. The fifth chapter discusses Childs’ central concepts of theological exegesis by investigating whether a Christocentric approach is an appropriate way of creating a unified biblical theology. In the sixth chapter I present a critical evaluation and methodological reflection on Childs’ theological exegesis in the canonical context. The final chapter sums up the key points of Childs’ methodology of biblical theology. The basic results of this thesis are as follows. First, the fundamental elements of Childs’ theological thinking are rooted in the Reformed theological tradition and in modern theological neo-orthodoxy and its most prominent theologian, Karl Barth. The American Biblical Theological Movement and the controversy between Protestant liberalism and conservatism in the modern American context cultivated his theological sensitivity and position. Second, Childs attempts to overcome the negative influences of the historical-critical method by establishing canon-based theological exegesis leading to a confessional biblical theology. Childs employs terminology such as canonical intentionality, the wholeness of the canon, the canon as the most appropriate context for doing biblical theology, and the continuity of the two Testaments, in order to put his canonical program into effect. Childs demonstrates forcefully the inadequacies of the historical-critical method in creating biblical theology in biblical hermeneutics, doctrinal theology and pastoral practice. His canonical approach endeavors to establish and create a post-critical Christian biblical theology, and works within the traditional framework of faith seeking understanding. Third, Childs’ biblical theology has a double task, descriptive and constructive: the former connects biblical theology with exegesis, the latter with dogmatic theology. He attempts to use a comprehensive model, which combines a thematic investigation of the essential theological contents of the Bible with a systematic analysis of the contents of the Christian faith. Childs also attempts to unite Old Testament theology and New Testament theology into one unified biblical theology. Fourth, some problematic points of Childs’ thinking need to be mentioned. For instance, his emphasis on the final form of the text of the biblical canon is highly controversial, yet Childs firmly believes in it; he even regards it as the cornerstone of his biblical theology.
The relationship between the canon and the doctrine of biblical inspiration is weak. He does not clearly define whether Scripture is God’s word or whether it only “witnesses” to it. Childs’ concepts of “the word of God” and “divine revelation” remain unclear, and their ontological status is ambiguous. Childs’ theological exegesis in the canonical context is a new attempt in the modern history of Christian theology. It expresses his sincere effort to create a path for doing biblical theology. Certainly, it was just a modest beginning of a long process.
Abstract:
The dissertation consists of three essays on the misplanning of wealth and health accumulation. Conventional economics assumes that individuals' intertemporal preferences are exponential (exponential preferences, EP). Recent findings in behavioural economics have shown that people actually discount the near future relatively more heavily than the distant future. This implies hyperbolic intertemporal preferences (HP). Essays I and II concentrate especially on the effects of the delayed completion of tasks, a feature of behaviour that HP enables. Essay III uses current Finnish data to analyse the evolution of quality-adjusted life years (QALYs) and inconsistencies in measuring them. Essay I studies the effects of the existence of a lucrative retirement savings program (SP) on the retirement savings of different individual types having HP. If the individual does not know that he will have HP also in the future, i.e. he is naïve, then under certain conditions he delays enrolment in SP until he abandons it altogether. A very interesting finding is that the naïve individual then retires poorer in the presence of SP than in its absence. Under the same conditions, the individual who knows that he will have HP also in the future, i.e. the sophisticated individual, gains from the existence of SP and retires with greater retirement savings in its presence than in its absence. Finally, the capability to learn from past behaviour and about one's intertemporal preferences improves the possibility of gaining from the existence of SP, but adequate time to learn must then be guaranteed. Essay II studies delayed doctor's visits, their effects on the costs of a public health care system, and the government's attempts to control patient behaviour and fund the system. The controlling devices are a consultation fee and a deductible for it. The deductible is effective only for a patient whose diagnosis reveals a disease that would not be cured without the doctor's visit. The naïve patients delay their visits the longest, while EP patients are the quickest to visit. To control the naïve patients, the government should implement a low fee and a high deductible, while for the sophisticated patients the opposite holds. Finally, if all the types exist in an economy, then using the incorrect conventional assumption that all individuals have EP leads to a worse situation and requires higher tax rates than assuming, incorrectly but unconventionally, that only the naïve type exists. Essay III studies the development of QALYs in Finland in 1995/96-2004. The essay concentrates on developing a consistent measure, i.e. one independent of discounting, for measuring age- and gender-specific QALY changes and their incidence. For the given time interval, use of the relative change out of an attainable change appears to be almost insensitive to discounting and reveals that the greatest gains accrue to the older age groups.
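To make the EP/HP distinction concrete, the standard textbook discount functions (the essays may use a specific variant) are

\[ D_{\mathrm{EP}}(t) = \delta^{t},\ 0<\delta<1; \qquad D_{\mathrm{HP}}(t) = \frac{1}{1+kt},\ k>0; \qquad D_{\beta\delta}(0)=1,\ D_{\beta\delta}(t)=\beta\,\delta^{t}\ \text{for } t\ge 1,\ 0<\beta<1, \]

where the last is the quasi-hyperbolic (beta-delta) approximation often used in applied work. With hyperbolic or quasi-hyperbolic discounting the short-run discount rate exceeds the long-run rate, which is what generates procrastination such as the delayed enrolment and delayed doctor's visits studied in Essays I and II.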
Abstract:
Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but it typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities for lowering the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects depends heavily on the form of the curvaton potential. Future observations that provide more accurate information on the non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
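For context, "nearly exponential expansion" refers to the standard slow-roll behaviour (general background, not a result of the thesis): for a scalar field \(\phi\) whose potential energy dominates the energy density,

\[ a(t) \;\propto\; e^{H t}, \qquad H^{2} \;\approx\; \frac{8\pi G}{3}\,V(\phi), \]

with the Hubble rate \(H = \dot a / a\) approximately constant as long as the potential \(V(\phi)\) is sufficiently flat; this is why the flatness of the inflaton (or flat-direction) potential is the central fine-tuning issue discussed above.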
Abstract:
This is an ethnographic study of the lived worlds of the keepers of small shops in a residential neighborhood in Seoul, South Korea. It outlines, discusses and analyses the categories and conceptualizations of South Korean capitalism at the level of households, neighborhoods and Korean society. These cultural categories were investigated through the neighborhood shopkeepers' practices of work and reciprocal interaction, as well as through the shopkeepers' articulations of their lived experience. In South Korea, the keepers of small businesses have continued to be a large occupational category despite societal and economic changes, accounting for approximately one fourth of the active work force. In spite of that, these people, their livelihoods and their cultural and social worlds have rarely been in the focus of social science inquiry. The ethnographic field research for this study was conducted during a 14-month period between November 1998 and December 1999 and in three subsequent short visits to Korea and to the research neighborhood. The fieldwork was conducted during the aftermath of the Asian currency crisis, colloquially termed the IMF crisis at the time, which highlighted the social and cultural circumstances of small businesskeepers in a specific way. The livelihoods of small-scale entrepreneurs became even more precarious than before; self-employment became an involuntary choice for many middle-class salaried employees who were laid off; and the cultural categories and concepts of society and economy, of South Korean capitalism, were articulated more sharply than before. This study begins with an overview of the contemporary setting, the Korean society under the socially and economically painful outcomes of the economic crisis, and continues with an overview of relevant literature. After introducing the research area and the informants, I discuss the Korean notion of neighborhood, which incorporates both notions of culturally valued Koreanness and a sense of deficiency in terms of modernity and development. The study further analyses the ways in which the businesskeepers appropriate and reproduce the Korean ideas of men's and women's gender roles and spheres of work. As the appropriation of children's labor is conditional on intergenerational family trajectories which aim not to reproduce the parents' occupational status but to gain entry to salaried occupations via educational credentials, the work of a married couple is the most common organization of work in small businesses, to which the Korean ideas of family and kin continuity are not applied. While the lack of generational succession in businesskeeping suggests that the proprietors mainly subscribe to notions of familial status that emanate from the practices of the white-collar middle class, the cases of certain women shopkeepers show that their proprietorship, and the ensuing economic standing in the family, prompts and invites inverted interpretations and uses of common cultural notions of gender. After discussing and analyzing the concept of money and the cultural categorization of leisure and work, topics that emerged as very significant in the lived world of the shopkeepers, this study charts and analyses the categories of identification which the shopkeepers employ for their cultural and social locations and identities.
Particular attention is paid to the idea of ordinary people (seomin), of whom shopkeepers are commonly considered to be most representative, and which also sums up the ambivalence of neighborhood shopkeepers as a social category: they are not committed to familial reproduction and continuity of the business but aspire to non-entrepreneurial careers for their children, while they occupy a significant position in the elaboration of culturally valued notions and ideologies defining Koreanness, such as warmheartedness and sociability.
Abstract:
This contribution focuses on the accelerated loss of traditional sound patterning in music, parallel to the exponential loss of linguistic and cultural variety in a world increasingly 'globalized' by market policies and economic liberalization, in which scientific or technical justification plays a crucial role. As a suggestion for an alternative trend, composers and music theorists are invited to explore the world of design and patterning by grammar rules from non-dominant cultures, and to make an effort to understand their contextual usage and its transformation, in order to appreciate their symbolism and aesthetic depth. Practical examples are provided.
Abstract:
Financial time series tend to behave in a manner that is not well described by a normal distribution. Asymmetries and nonlinearities are usually present, and these characteristics need to be taken into account. Making forecasts and predictions of future return and risk is therefore rather complicated. The existing models for predicting risk help to a certain degree, but the complexity of financial time series data makes the task difficult. The introduction of nonlinearities and asymmetries, for the purpose of better models and forecasts regarding both the mean and the variance, is supported by the essays in this dissertation. Linear and nonlinear models are consequently introduced in this dissertation. The advantage of nonlinear models is that they can take asymmetries into account. Asymmetric patterns usually mean that large negative returns appear more often than positive returns of the same magnitude. This goes hand in hand with the fact that negative returns are associated with higher risk than positive returns of the same magnitude. These models are important because of their ability to produce the best possible estimates and predictions of future returns and risk.
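The abstract does not name the specific models; a widely used example of an asymmetric (nonlinear) conditional variance specification of the kind described here is the GJR-GARCH(1,1) model,

\[ \sigma_t^{2} \;=\; \omega + \big(\alpha + \gamma\,\mathbf{1}\{\varepsilon_{t-1}<0\}\big)\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2}, \]

where \(\varepsilon_{t-1}\) is the previous return shock and \(\mathbf{1}\{\cdot\}\) an indicator function; a positive \(\gamma\) means that negative shocks raise next-period variance more than positive shocks of the same magnitude, capturing the asymmetry between negative and positive returns noted above.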
Abstract:
This paper uses the Value-at-Risk (VaR) approach to define the risk in both long and short trading positions. The investigation is done on some major market indices (Japanese, UK, German and US). The performance of models that take into account skewness and fat tails is compared to that of symmetric models, in relation to both the specific model for estimating the variance and the distribution of the variance estimate used as input in the VaR estimation. The results indicate that more flexible models do not necessarily perform better in predicting the VaR forecast; the reason for this is most probably the complexity of these models. A general result is that different methods for estimating the variance are needed for different confidence levels of the VaR and for the different indices. Also, different models should be used for the left and the right tail of the distribution, respectively.
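As a sketch of the standard construction behind long- and short-position VaR (notation mine, not taken from the paper): with conditional mean \(\mu_t\), conditional standard deviation \(\sigma_t\) and an assumed innovation distribution with quantile function \(F^{-1}\),

\[ \mathrm{VaR}^{\text{long}}_{\alpha,t} \;=\; \mu_t + \sigma_t\,F^{-1}(\alpha), \qquad \mathrm{VaR}^{\text{short}}_{\alpha,t} \;=\; \mu_t + \sigma_t\,F^{-1}(1-\alpha), \]

so a long position is exposed to the left tail of the return distribution and a short position to the right tail. If F is skewed or fat-tailed, the two quantiles are not mirror images of each other, which is why different models can be preferable for the two tails, as the results above indicate.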