37 results for Flare Stars Searching
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and find cures for problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem: searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine, i.e. they should also hold in future data. This is an important distinction from traditional association rules, which, in spite of their name and a similar appearance to dependency rules, do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of the dependence without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that traditional pruning techniques do not work.
As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over existing solutions. In practice, this means that the user does not have to worry about whether the dependencies hold in future data or whether the data still contains better, but undiscovered, dependencies.
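As an illustration of the kind of significance testing involved, the sketch below scores a single candidate rule X -> A on binary data with a one-sided Fisher's exact test. This is a plain-Python illustration of the measure, not the thesis's search algorithm, and the row/attribute encoding is a hypothetical choice:

```python
from math import comb

def contingency(rows, x_attrs, a_attr):
    """2x2 counts for the rule 'all attributes in X are 1' -> 'A is 1'."""
    n11 = n10 = n01 = n00 = 0
    for row in rows:
        x = all(row[attr] for attr in x_attrs)
        a = bool(row[a_attr])
        if x and a:
            n11 += 1
        elif x:
            n10 += 1
        elif a:
            n01 += 1
        else:
            n00 += 1
    return n11, n10, n01, n00

def fisher_one_sided(n11, n10, n01, n00):
    """P-value of seeing at least n11 co-occurrences by chance:
    the hypergeometric upper tail, i.e. one-sided Fisher's exact test."""
    r1, c1, n = n11 + n10, n11 + n01, n11 + n10 + n01 + n00
    denom = comb(n, c1)
    return sum(comb(r1, k) * comb(n - r1, c1 - k)
               for k in range(n11, min(r1, c1) + 1)) / denom
```

The search problem discussed above is then to find, among the exponentially many candidate sets X, the rules minimizing such a p-value without enumerating them all.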
Abstract:
Previous scholarship has often maintained that the Gospel of Philip is a collection of Valentinian teachings. In the present study, however, the text is read as a whole and placed into a broader context by searching for parallels in other early Christian texts. Although the Valentinian Christian identity of the Gospel of Philip is not questioned, it is read alongside those texts traditionally labelled as "mainstream Christian". It is obvious from the account of Irenaeus that the boundaries between the Valentinians and other Christians were not as clear or fixed as he probably would have hoped. This study analyzes the Valentinian Christian Gospel of Philip from two points of view: how the text constructs the Christian identity and what kind of Christianity it exemplifies. Firstly, it is observed how the author of the Gospel of Philip places himself and his Christian readers among the early Christianities of the period by emphasizing their common history and Christian features while building especially on particular texts and traditions. Secondly, it is noted how the Christian nature of an individual develops according to the Gospel of Philip. The identity of an individual is built and strengthened through rituals, experiences and teaching. Thirdly, the categorizations, attributes, beliefs and behaviour associated on the one hand with the "insiders", the true Christians, and, on the other, with the outsiders in the Gospel of Philip are analyzed using social identity theory; the insiders and outsiders are described through stereotyping in the text. Overall, the study implies that the Gospel of Philip strongly emphasizes spiritual progress and transformation. Rather than depicting the Valentinians as the perfect Christians, it underlines their need for constant change and improvement. Although the author seeks to clearly distinguish the insiders from the outsiders, the boundaries of the categories are in fact fluid in the Gospel of Philip.
Outsiders can become insiders and the insiders are also in danger of falling out again.
Abstract:
The juvenile sea squirt wanders through the sea searching for a suitable rock or hunk of coral to cling to and make its home for life. For this task it has a rudimentary nervous system. When it finds its spot and takes root, it doesn't need its brain any more, so it eats it. It's rather like getting tenure. Daniel C. Dennett (from Consciousness Explained, 1991) The little sea squirt needs its brain for a task that is very simple and short. When the task is completed, the sea squirt starts a new life in a vegetative state, after having a nourishing meal. The little brain is more tightly structured than our massive primate brains. The number of neurons is exact; no leeway in neural proliferation is tolerated. Each neuroblast migrates exactly to the correct position, and only a certain number of connections with the right companions is allowed. In comparison, the growth of a mammalian brain is a merry mess. The reason is obvious: the squirt brain needs to perform only a few predictable functions before becoming waste. The more mobile and complex mammals engage their brains in tasks requiring quick adaptation and plasticity in a constantly changing environment. Although the regulation of nervous system development varies between species, many regulatory elements remain the same. For example, all multicellular animals possess a collection of proteoglycans (PGs): proteins with attached, complex sugar chains called glycosaminoglycans (GAGs). In development, PGs participate in the organization of the animal body, for example in the construction of parts of the nervous system. The PGs capture water with their GAG chains, forming a biochemically active gel at the surface of the cell and in the extracellular matrix (ECM). In the nervous system, this gel traps different molecules inside it: growth factors and ECM-associated proteins. These regulate the proliferation of neural stem cells (NSCs), guide the migration of neurons, and coordinate the formation of neuronal connections.
In this work I have followed the role of two molecules contributing to the complexity of mammalian brain development. N-syndecan is a transmembrane heparan sulfate proteoglycan (HSPG) with cell signaling functions. Heparin-binding growth-associated molecule (HB-GAM) is an ECM-associated protein with high expression in the perinatal nervous system and high affinity for HS and heparin. N-syndecan is a receptor for several growth factors and for HB-GAM. HB-GAM induces specific signaling via N-syndecan, activating c-Src, calcium/calmodulin-dependent serine protein kinase (CASK) and cortactin. By studying the gene knockouts of HB-GAM and N-syndecan in mice, I have found that HB-GAM and N-syndecan are involved as a receptor-ligand pair in neural migration and differentiation. HB-GAM competes with the growth factors fibroblast growth factor (FGF)-2 and heparin-binding epidermal growth factor (HB-EGF) for HS-binding, causing NSCs to stop proliferating and to differentiate, and affects HB-EGF-induced EGF receptor (EGFR) signaling in neural cells during migration. N-syndecan signaling affects the motility of young neurons by boosting EGFR-mediated cell migration. In addition, these two receptors form a complex at the surface of the neurons, probably creating a motility-regulating structure.
Abstract:
The objective of this study was to assess the utility of two subjective facial grading systems, to evaluate the etiologic role of human herpesviruses in peripheral facial palsy (FP), and to explore characteristics of Melkersson-Rosenthal syndrome (MRS). Intrarater repeatability and interrater agreement were assessed for the Sunnybrook (SFGS) and House-Brackmann facial grading systems (H-B FGS). Eight video-recorded FP patients were graded in two sittings by 26 doctors. Repeatability for SFGS was good to excellent and agreement between doctors moderate to excellent by the intraclass correlation coefficient and the coefficient of repeatability. For H-B FGS, repeatability was fair to good and agreement poor to fair by agreement percentage and kappa coefficients. Because SFGS was at least as good in repeatability as H-B FGS and showed more reliable results in agreement between doctors, we encourage the use of SFGS over H-B FGS. The etiologic role of human herpesviruses in peripheral FP was studied by searching for DNA of herpes simplex virus (HSV) -1 and -2, varicella-zoster virus (VZV), human herpesvirus (HHV) -6A, -6B, and -7, Epstein-Barr virus (EBV), and cytomegalovirus (CMV) by PCR/microarray methods in the cerebrospinal fluid (CSF) of 33 peripheral FP patients and 36 controls. Three patients and five controls had HHV-6 or -7 DNA in CSF. No DNA of HSV-1 or -2, VZV, EBV, or CMV was found. Detecting HHV-7 and dual HHV-6A and -6B DNA in the CSF of FP patients is intriguing, but does not allow etiologic conclusions as such. These DNA findings in association with FP and the other diseases that they accompanied require further exploration. MRS is classically defined as a triad of recurrent labial or oro-facial edema, recurrent peripheral FP, and plicated tongue. All three signs are present in only a minority of patients. Edema-dominated forms are more common in the literature, while MRS with FP has received little attention. The etiology and true incidence of MRS are unknown.
Characteristics of MRS were evaluated at the Departments of Otorhinolaryngology and Dermatology, focusing on patients with FP. There were 35 MRS patients, 20 of them with FP; they were mailed a questionnaire (17 answered) and were clinically examined (14 patients). At the Department of Otorhinolaryngology, every MRS patient had FP and half had the triad form of MRS. Tissue biopsies taken from two patients during an acute edema episode revealed nonnecrotizing granulomatous findings typical of MRS, one of the patients without persisting edema and with symptoms for less than a year. Peripheral blood DNA was searched for gene mutations leading to UNC-93B protein deficiency, which predisposes to HSV-1 infections; no gene mutations were found. Edema in most MRS FP patients did not dominate the clinical picture, and no progression of the disease was observed, contrary to existing knowledge. At the Department of Dermatology, two patients had triad MRS and 15 had monosymptomatic granulomatous cheilitis with frequent or persistent edema and typical MRS tissue histology. The clinical picture of MRS varied according to the department where the patient was treated. More studies from otorhinolaryngology departments and on patients with FP would clarify the actual incidence and clinical picture of the syndrome.
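For readers unfamiliar with the agreement statistics mentioned above, the following sketch computes Cohen's kappa, the chance-corrected agreement coefficient of the kind used for H-B FGS. The rater data in the test are invented for illustration, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters grading the
    same patients. 1 = perfect agreement, 0 = chance-level agreement."""
    n = len(rater_a)
    # Observed proportion of identical grades.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater assigned grades independently
    # with their own marginal frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[g] * cb[g] for g in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)
```

Intraclass correlation plays the analogous role for the continuous SFGS scores; kappa applies to the categorical H-B grades.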
Abstract:
Solar flares were first observed with the naked eye in white light by Richard Carrington in England in 1859. Since then these eruptions in the solar corona have intrigued scientists. It is known that flares influence the space weather experienced by the planets in a multitude of ways, for example by causing aurora borealis. Understanding flares is central to human survival in space, as astronauts cannot withstand high doses of the highly energetic particles associated with large flares without contracting serious radiation disease symptoms, unless they shield themselves effectively during space missions. Flares may have been central to survival in the past as well: it has been suggested that giant flares might have played a role in exterminating many of the large species on Earth, including the dinosaurs. On the other hand, prebiotic synthesis studies have shown lightning to be a decisive requirement for amino acid synthesis on the primordial Earth, and increased lightning activity could be attributed to space weather, and thus to flares. This thesis studies flares in two ways: in the spectral and the spatial domain. We have extracted solar spectra for the same flares using three different instruments, namely GOES (Geostationary Operational Environmental Satellite), RHESSI (Reuven Ramaty High Energy Solar Spectroscopic Imager) and XSM (X-ray Solar Monitor). The GOES spectra are low resolution, obtained with a gas proportional counter; the RHESSI spectra are higher resolution, obtained with germanium detectors; and the XSM spectra are very high resolution, observed with a silicon detector. It turns out that the detector technology and response substantially influence the spectra we see, and are important for deciding what conclusions to draw from the data. With imaging data, there was no such luxury of choice available. We used RHESSI imaging data to observe the spatial size of solar flares. In the present work the focus was primarily on current solar flares.
However, we did make use of our improved understanding of solar flares to observe young suns in NGC 2547. The same techniques used with solar monitors were applied with XMM-Newton, a stellar X-ray monitor, and coupled with ground-based H-alpha observations these techniques yielded estimates for flare parameters in young suns. The material in this thesis is therefore structured from technology to application, covering the full processing path from raw data and detector responses to concrete physical parameter results, such as the first measurement of the length of plasma flare loops in young suns.
Abstract:
New stars form in dense interstellar clouds of gas and dust called molecular clouds. The actual sites where the process of star formation takes place are the dense clumps and cores deeply embedded in molecular clouds. The details of the star formation process are complex and not completely understood. Thus, determining the physical and chemical properties of molecular cloud cores is necessary for a better understanding of how stars are formed. Some of the main features of the origin of low-mass stars, such as the Sun, are already relatively well known, though many details of the process are still under debate. The mechanism through which high-mass stars form, on the other hand, is poorly understood. Although it is likely that the formation of high-mass stars shares many properties with that of low-mass stars, the very first steps of the evolutionary sequence are unclear. Observational studies of star formation are carried out particularly at infrared, submillimetre, millimetre, and radio wavelengths. Much of our knowledge about the early stages of star formation in our Milky Way galaxy is obtained through molecular spectral line and dust continuum observations. The continuum emission of cold dust is one of the best tracers of the column density of molecular hydrogen, the main constituent of molecular clouds. Consequently, dust continuum observations provide a powerful tool to map large portions of molecular clouds and to identify the dense star-forming sites within them. Molecular line observations, on the other hand, provide information on the gas kinematics and temperature. Together, these two observational tools provide an efficient way to study the dense interstellar gas and the associated dust that form new stars. The properties of highly obscured young stars can be further examined through radio continuum observations at centimetre wavelengths.
For example, radio continuum emission carries useful information on conditions in the protostar+disk interaction region where protostellar jets are launched. In this PhD thesis, we study the physical and chemical properties of dense clumps and cores in both low- and high-mass star-forming regions. The sources are mainly studied in a statistical sense, but also in more detail. In this way, we are able to examine the general characteristics of the early stages of star formation, cloud properties on large scales (such as fragmentation), and some of the initial conditions of the collapse process that leads to the formation of a star. The studies presented in this thesis are mainly based on molecular line and dust continuum observations. These are combined with archival observations at infrared wavelengths in order to study the protostellar content of the cloud cores. In addition, centimetre radio continuum emission from young stellar objects (YSOs; i.e., protostars and pre-main sequence stars) is studied in this thesis to determine their evolutionary stages. 
The main results of this thesis are as follows: i) filamentary and sheet-like molecular cloud structures, such as infrared dark clouds (IRDCs), are likely to be caused by supersonic turbulence, but their fragmentation at the scale of cores could be due to gravo-thermal instability; ii) the core evolution in the Orion B9 star-forming region appears to be dynamic, and the role played by slow ambipolar diffusion in the formation and collapse of the cores may not be significant; iii) the study of the R CrA star-forming region suggests that the centimetre radio emission properties of a YSO are likely to change with its evolutionary stage; iv) the IRDC G304.74+01.32 contains candidate high-mass starless cores which may represent the very first steps of high-mass star and star cluster formation; v) SiO outflow signatures are seen in several high-mass star-forming regions, suggesting that high-mass stars form in a similar way to their low-mass counterparts, i.e., via disk accretion. The results presented in this thesis provide constraints on the initial conditions and early stages of both low- and high-mass star formation. In particular, this thesis presents several observational results on the early stages of clustered star formation, which is the dominant mode of star formation in our Galaxy.
Local numerical modelling of magnetoconvection and turbulence - implications for mean-field theories
Abstract:
During the last decades mean-field models, in which large-scale magnetic fields and differential rotation arise from the interaction of rotation and small-scale turbulence, have been enormously successful in reproducing many of the observed features of the Sun. In the meantime, new observational techniques, most prominently helioseismology, have yielded invaluable information about the interior of the Sun. This new information, however, imposes strict conditions on mean-field models. Moreover, most of the present mean-field models depend on knowledge of the small-scale turbulent effects that give rise to the large-scale phenomena. In many mean-field models these effects are prescribed in an ad hoc fashion due to the lack of this knowledge. With large enough computers it would be possible to solve the MHD equations numerically under stellar conditions. However, the problem is too large by several orders of magnitude for present-day and any foreseeable computers. In our view, a combination of mean-field modelling and local 3D calculations is a more fruitful approach. The large-scale structures are well described by global mean-field models, provided that the small-scale turbulent effects are adequately parameterized. The latter can be achieved by performing local calculations, which allow a much higher spatial resolution than can be achieved in direct global calculations. In the present dissertation three aspects of mean-field theories and models of stars are studied. Firstly, the basic assumptions of different mean-field theories are tested with calculations of isotropic turbulence and hydrodynamic, as well as magnetohydrodynamic, convection. Secondly, even if mean-field theory is unable to give the required transport coefficients from first principles, it is in some cases possible to compute these coefficients from 3D numerical models in a parameter range that can be considered to describe the main physical effects in an adequately realistic manner.
In the present study, the Reynolds stresses and turbulent heat transport, responsible for the generation of differential rotation, were determined along with the mixing length relations describing convection in stellar structure models. Furthermore, the alpha-effect and magnetic pumping due to turbulent convection in the rapid rotation regime were studied. The third aim of the present study is to apply the local results in mean-field models, a task we begin by applying the results concerning the alpha-effect and turbulent pumping in mean-field models describing the solar dynamo.
Abstract:
New stars in galaxies form in the dense, molecular clouds of the interstellar medium. Measuring how mass is distributed in these clouds is of crucial importance for current theories of star formation. This is because several open issues in those theories, such as the strength of the different mechanisms regulating star formation and the origin of stellar masses, can be addressed using detailed information on cloud structure. Unfortunately, quantifying the mass distribution in molecular clouds accurately over a wide spatial and dynamical range is a fundamental problem in modern astrophysics. This thesis presents studies examining the structure of dense molecular clouds and the distribution of mass in them, with the emphasis on nearby clouds that are sites of low-mass star formation. In particular, this thesis concentrates on investigating the mass distributions using the near-infrared dust extinction mapping technique. In this technique, the gas column densities towards molecular clouds are determined by examining radiation from the stars that shine through the clouds. In addition, the thesis examines the feasibility of using a similar technique to derive the masses of molecular clouds in nearby external galaxies. The papers presented in this thesis demonstrate how the near-infrared dust extinction mapping technique can be used to extract detailed information on the mass distribution in nearby molecular clouds. Furthermore, such information is used to examine characteristics crucial for star formation in the clouds. Regarding the use of the extinction mapping technique in nearby galaxies, the papers of this thesis show that deriving the masses of molecular clouds with the technique suffers from strong biases. However, it is shown that some structural properties can still be examined with the technique.
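The principle behind extinction mapping can be sketched very simply: a star's observed near-infrared colour, compared with an assumed intrinsic colour, gives a colour excess, which an assumed reddening law converts to visual extinction and hence to column density. The intrinsic colour and conversion factor below are illustrative assumptions, not values taken from the thesis:

```python
def visual_extinction(h_k_observed, h_k_intrinsic=0.15, ratio=15.87):
    """A_V estimate from one background star: A_V = ratio * E(H-K),
    where E(H-K) is the observed minus the (assumed) intrinsic H-K
    colour. Both default values here are assumptions for illustration;
    real work calibrates them from a reddening law and a control field."""
    return ratio * (h_k_observed - h_k_intrinsic)
```

A map is then built by averaging such per-star estimates over sky positions, which is where the resolution and bias issues discussed above arise.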
Abstract:
In this thesis, the acceleration of energetic particles at collisionless shock waves in space plasmas is studied using numerical simulations, with an emphasis on physical conditions applicable to the solar corona. The thesis consists of four research articles and an introductory part that summarises the main findings reached in the articles and discusses them with respect to the theory of diffusive shock acceleration and observations. The thesis gives a brief review of the observational properties of solar energetic particles and discusses a few open questions that are currently under active research. For example, in a few large gradual solar energetic particle events the heavy ion abundance ratios and average charge states show characteristics at high energies that are typically associated with flare-accelerated particles, i.e. impulsive events. The role of flare-accelerated particles in these and other gradual events has been much discussed in the scientific community, and it has been questioned whether and how the observed features can be explained in terms of diffusive shock acceleration at shock waves driven by coronal mass ejections. The most extreme solar energetic particle events are the so-called ground level enhancements, where particles receive energies so high that they can penetrate all the way through Earth's atmosphere and increase radiation levels at the surface. It is not known what conditions are required for acceleration to GeV/nuc energies, and the presence of both very fast coronal mass ejections and X-class solar flares makes it difficult to determine the role of these two accelerators in ground level enhancements. The theory of diffusive shock acceleration is reviewed and its predictions discussed with respect to the observed particle characteristics. We discuss how shock waves can be modeled and describe in detail the numerical model developed by the author.
The main part of this thesis consists of the four scientific articles that are based on results of the numerical shock acceleration model developed by the author. The novel feature of this model is that it can handle the complex magnetic geometries found, for example, near active regions in the solar corona. We show that, according to our simulations, diffusive shock acceleration can explain the observed variations in abundance ratios and average charge states, provided that suitable seed particles and magnetic geometry are available for the acceleration process in the solar corona. We also derive an injection threshold for diffusive shock acceleration that agrees very well with our simulation results and is valid under weakly turbulent conditions. Finally, we show that diffusive shock acceleration can produce GeV/nuc energies under suitable coronal conditions, which include the presence of energetic seed particles, a favourable magnetic geometry, and an enhanced level of ambient turbulence.
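A standard test-particle prediction of the diffusive shock acceleration theory reviewed above relates the power-law index of the accelerated particle spectrum to the shock's gas compression ratio. The following is a sketch of that textbook relation, not of the author's numerical model:

```python
def dsa_spectral_index(r):
    """Spectral index gamma of the differential intensity of accelerated
    relativistic particles, dJ/dE ~ E**(-gamma), predicted by steady-state
    test-particle diffusive shock acceleration for a shock with gas
    compression ratio r (1 < r <= 4 for an ideal monatomic gas)."""
    return (r + 2.0) / (r - 1.0)

# A strong non-relativistic shock (r = 4) gives the canonical gamma = 2;
# weaker shocks (smaller r) give steeper, softer spectra.
```

The simulations in the articles go beyond this limit by treating the injection threshold and the complex magnetic geometry explicitly.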
Abstract:
The first observations of solar X-rays date back to the late 1940s. In order to observe solar X-rays, instruments have to be lifted above the Earth's atmosphere, since all high-energy radiation from space is almost totally attenuated by it. This is a good thing for all living creatures, but bad for X-ray astronomers. Detectors observing X-ray emission from space must be placed on board satellites, which makes this particular discipline of astronomy technologically and operationally demanding, as well as very expensive. In this thesis, I have focused on detectors dedicated to observing solar X-rays in the energy range 1-20 keV. The purpose of these detectors was to measure solar X-rays simultaneously with another X-ray spectrometer measuring fluorescence X-ray emission from the Moon's surface. The X-ray fluorescence emission is induced by the primary solar X-rays. If the elemental abundances on the Moon are to be determined with fluorescence analysis methods, the shape and intensity of the simultaneous solar X-ray spectrum must be known. The aim of this thesis is to describe the characterization and operation of our X-ray instruments on board two Moon missions, SMART-1 and Chandrayaan-1. The independent solar science performance of these two nearly identical X-ray spectrometers is also described. These detectors have two features in common. Firstly, the primary detection element is made of a single-crystal silicon diode. Secondly, the field of view is circular and very large. The data obtained from these detectors are spectra with a 16-second time resolution. Before launching an instrument into space, its performance must be characterized by ground calibrations. The basic operation of these detectors and their ground calibrations are described in detail. Two C-flares are analyzed as examples to introduce the spectral fitting process. The first flare analysis shows the fit of a single spectrum of the C1 flare obtained during the peak phase.
The other analysis example shows how to derive the time evolution of fluxes, emission measures (EM) and temperatures through a whole single C4 flare with a time resolution of 16 s. The preparatory data analysis procedures, which are required in the spectral fittings of the data, are also introduced in detail. A new solar monitor design, equipped with concentrator optics and a moderately sized field of view, is also introduced.
Abstract:
Parkinson's disease (PD) is a neurodegenerative disorder associated with a progressive loss of the dopaminergic neurons of the substantia nigra (SN). Current therapies for PD do not stop the progression of the disease, and the efficacy of these treatments wanes over time. Neurotrophic factors are naturally occurring proteins that promote the survival and differentiation of neurons and the maintenance of neuronal contacts. Neurotrophic factors are attractive candidates for neuroprotective or even neurorestorative treatment of PD. Thus, searching for and characterizing trophic factors are highly important approaches to degenerative diseases. CDNF (cerebral dopamine neurotrophic factor) and MANF (mesencephalic astrocyte-derived neurotrophic factor) are secreted proteins that constitute a novel, evolutionarily conserved neurotrophic factor family expressed in vertebrates and invertebrates. The present study investigated the neuroprotective and restorative effects of human CDNF and MANF in rats with a unilateral partial lesion of dopamine neurons induced by 6-hydroxydopamine (6-OHDA), using both behavioral (amphetamine-induced rotation) and immunohistochemical analyses. We also investigated the distribution and transportation profiles of intrastriatally injected CDNF and MANF in rats. Intrastriatal CDNF and MANF protected nigrostriatal dopaminergic neurons when administered six hours before or four weeks after the neurotoxin 6-OHDA. More importantly, the function of the lesioned nigrostriatal dopaminergic system was partially restored even when the neurotrophic factors were administered four weeks after 6-OHDA. A 14-day continuous infusion of CDNF, but not of MANF, restored the function of the midbrain neural circuits controlling movement when initiated two weeks after unilateral injection of 6-OHDA. Continuous infusion of CDNF also protected dopaminergic TH-positive cell bodies from toxin-induced degeneration in the substantia nigra pars compacta (SNpc) and fibers in the striatum.
When injected into the striatum, CDNF and GDNF had similar transportation profiles from the striatum to the SNpc; thus CDNF may act via the same nerve tracts as GDNF. Intrastriatal MANF was transported to cortical areas which may reflect a mechanism of neurorestorative action that is different from that of CDNF and GDNF. CDNF and MANF were also shown to distribute more readily than GDNF. In conclusion, CDNF and MANF are potential therapeutic proteins for the treatment of PD.
Abstract:
The trees in the Penn Treebank have a standard representation that involves complete balanced bracketing. In this article, an alternative to this standard representation of the treebank is proposed. The proposed representation for the trees is lossless, but it reduces the total number of brackets by 28%. This is made possible by omitting the redundant pairs of special brackets that encode initial and final embedding, using a technique proposed by Krauwer and des Tombe (1981). In terms of the paired brackets, the maximum nesting depth in sentences decreases by 78%. A coverage of 99.9% is achieved with only five non-top levels of paired brackets. The observed shallowness of the reduced bracketing suggests that finite-state-based methods for parsing and searching could be a feasible option for treebank processing.
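The idea of omitting brackets that encode initial and final embedding can be illustrated as follows: a constituent that starts at the left edge of its parent (an opening bracket immediately after another opening bracket) or ends at its right edge (a closing bracket immediately before another closing bracket) carries redundant boundary information. The sketch below merely identifies such pairs; it is a simplified illustration, not the exact Krauwer and des Tombe encoding:

```python
def match_pairs(s):
    """Return (open_index, close_index) for every bracket pair in s."""
    stack, pairs = [], []
    for i, ch in enumerate(s):
        if ch == '(':
            stack.append(i)
        elif ch == ')':
            pairs.append((stack.pop(), i))
    return pairs

def redundant_pairs(s):
    """Pairs encoding initial embedding (open directly after another '(')
    or final embedding (close directly before another ')'): the candidates
    for the lighter encoding that reduces the total bracket count."""
    return [(i, j) for i, j in match_pairs(s)
            if (i > 0 and s[i - 1] == '(') or (j + 1 < len(s) and s[j + 1] == ')')]
```

For example, in "((a b) c)" the inner constituent is initially embedded, so its bracket pair is a removal candidate, while the outermost pair is not.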
Abstract:
A better understanding of stock price changes is important in guiding many economic activities. Since prices often do not change without good reasons, the search for related explanatory variables has attracted many researchers. This book seeks answers from prices per se by relating price changes to their conditional moments. This is based on the belief that prices are the products of a complex psychological and economic process, and that their conditional moments derive ultimately from these psychological and economic shocks. Utilizing information about conditional moments is hence an attractive alternative to using other selective financial variables in explaining price changes. The first paper examines the relation between the conditional mean and the conditional variance using information about moments in three types of conditional distributions; it finds that the significance of the estimated mean-variance ratio can be affected by the assumed distributions and by time variation in skewness. The second paper decomposes conditional industry volatility into a concurrent market component and an industry-specific component; it finds that market volatility is on average responsible for a rather small share of total industry volatility: 6 to 9 percent in the UK and 2 to 3 percent in Germany. The third paper studies the heteroskedasticity in stock returns through an ARCH process supplemented with a set of conditioning information variables; it finds that the heteroskedasticity in stock returns takes several forms, including deterministic changes in variance due to seasonal factors, random adjustments in variance due to market and macro factors, and ARCH processes driven by past information.
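As background for the third paper's setup, the following is a minimal sketch of how an ARCH process generates time-varying (heteroskedastic) volatility; the ARCH(1) specification and parameter values here are illustrative assumptions, not the book's estimated model:

```python
import random

def simulate_arch1(n, omega=0.1, alpha=0.5, seed=42):
    """Simulate returns r_t = sigma_t * z_t with ARCH(1) variance
    sigma_t^2 = omega + alpha * r_{t-1}^2, where z_t is standard normal."""
    rng = random.Random(seed)
    returns, prev_r = [], 0.0
    for _ in range(n):
        var = omega + alpha * prev_r ** 2   # variance driven by the last shock
        r = var ** 0.5 * rng.gauss(0.0, 1.0)
        returns.append(r)
        prev_r = r
    return returns

# Large shocks tend to be followed by large shocks (volatility clustering),
# while the unconditional variance settles near omega / (1 - alpha) = 0.2.
rets = simulate_arch1(50_000)
```

The book's papers extend this basic mechanism by conditioning the variance on seasonal, market, and macro information variables in addition to past shocks.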
The fourth paper examines the role of higher moments, especially skewness and kurtosis, in determining expected returns; it finds that total skewness and total kurtosis are more relevant non-beta risk measures, and that they are costly to diversify away, due either to the possible elimination of their desirable parts or to the unsustainability of diversification strategies based on them.
Abstract:
The paradoxical co-existence of conflicting logics governs practices in cultural organizations. This requires ‘balancing acts’ between artistic and managerial efforts, which are often subject to struggle among organizational members. This ethnographic study aims to go beyond either-or thinking about this paradoxical organizational context by examining how the members of an opera house construct views of their organization in dialogical meaning-making processes. Various professional groups, dozens of upcoming productions, increased international cooperation, and global competition combined with scarce financial resources make opera houses a complex yet interesting context for organization studies. To provide deeper knowledge of the internal dynamics of an opera organization, this thesis takes an interpretative view to examine the ways organizational members construct and make sense of their organization. How is the opera organization constructed by its members? How do the members draw on different logics when relating to their organization? And what elements characterize the relational processes of organizational identity construction in an opera organization? The thesis aims to answer these questions by providing a detailed description of the everyday life of an opera organization, with a particular focus on organizational identity construction. The processes of organizational identity construction are approached from a relational point of view. This may involve relations between multiple positions, different professional groups, other organizations in the cultural field, or between past and present understandings of an organization. The study shows that the construction of an opera organization involves not only the two conflicting logics of art and economy, but also the logic of a national institution.
The study also suggests that organizational identities are constructed through processes related to the dialogics of positions, work, and management practices. These dialogics involve various struggles through which organizational members find themselves between different organizational aspects, such as visiting ‘stars’ and an ensemble, or between the ‘Finnishness’ of opera productions and internationalization. In addition, the study argues that struggle between different elements is a general mode of relation in cultural organizations and therefore an inherent and enduring aspect of organizational identity construction. However, the space of ‘being in between’ involves both enabling and constraining elements in dialogical identity construction in the context of cultural organizations, which presents the struggle in a more generative light.