47 results for Minkowski sum
Abstract:
This PhD Thesis is about certain infinite-dimensional Grassmannian manifolds that arise naturally in geometry, representation theory and mathematical physics. From the physics point of view, one encounters these infinite-dimensional manifolds when trying to understand the second quantization of fermions. The many-particle Hilbert space of the second quantized fermions is called the fermionic Fock space. A typical element of the fermionic Fock space can be thought of as a linear combination of configurations of m particles and n anti-particles. Geometrically, the fermionic Fock space can be constructed as the space of holomorphic sections of a certain (dual) determinant line bundle lying over the so-called restricted Grassmannian manifold, which is a typical example of the infinite-dimensional Grassmannian manifolds one encounters in QFT. The construction should be compared with its well-known finite-dimensional analogue, where one realizes an exterior power of a finite-dimensional vector space as the space of holomorphic sections of a determinant line bundle lying over a finite-dimensional Grassmannian manifold. The connection with infinite-dimensional representation theory stems from the fact that the restricted Grassmannian manifold is an infinite-dimensional homogeneous (Kähler) manifold, i.e. it is of the form G/H, where G is a certain infinite-dimensional Lie group and H a subgroup of G. A central extension of G acts on the total space of the dual determinant line bundle and also on the space of its holomorphic sections; thus G admits a (projective) representation on the fermionic Fock space. This construction also induces the so-called basic representation of loop groups (of compact groups), which in turn are vitally important in string theory and conformal field theory. The Thesis consists of three chapters: the first chapter is an introduction to the background material, and the other two chapters are individually written research articles. The first article deals in a new way with a well-known question in Yang-Mills theory: when can one lift the action of the gauge transformation group on the space of connection one-forms to the total space of the Fock bundle in a way compatible with the second quantized Dirac operator? In general there is an obstruction to this (called the Mickelsson-Faddeev anomaly), and various geometric interpretations of this anomaly, using such things as group extensions and bundle gerbes, have been given earlier. In this work we give a new geometric interpretation of the Faddeev-Mickelsson anomaly in terms of differentiable gerbes (certain sheaves of categories) and central extensions of Lie groupoids. The second research article deals with the question of how to define a Dirac-like operator on the restricted Grassmannian manifold, which is an infinite-dimensional space and hence outside the scope of standard Dirac operator theory. The construction relies heavily on infinite-dimensional representation theory, and one of the most technically demanding challenges is to introduce proper normal orderings for certain infinite sums of operators in such a way that all divergences disappear and the infinite sum makes sense as a well-defined operator acting on a suitable Hilbert space of spinors. This research article was motivated by a more extensive ongoing project to construct twisted K-theory classes in Yang-Mills theory via a Dirac-like operator on the restricted Grassmannian manifold.
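One way to state the finite-dimensional analogue mentioned above (a sketch, up to conventions on dualization): for the Grassmannian Gr(k, n) of k-planes in C^n, let Det be the line bundle whose fibre over a subspace W is its top exterior power. The Plücker coordinates then span the holomorphic sections of the dual bundle, so that

\[
  H^0\bigl(\mathrm{Gr}(k,n),\,\mathrm{Det}^*\bigr) \;\cong\; \Lambda^k(\mathbb{C}^n)^*,
\]

realizing an exterior power of a finite-dimensional vector space as a space of holomorphic sections, just as the fermionic Fock space is realized over the restricted Grassmannian.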
Abstract:
Many problems in analysis have been solved using the theory of Hodge structures. P. Deligne started to treat these structures in a categorical way. Following him, we introduce the categories of mixed real and complex Hodge structures. The category of mixed Hodge structures over the field of real or complex numbers is a rigid abelian tensor category and, in fact, a neutral Tannakian category. It is therefore equivalent to the category of representations of an affine group scheme. Direct sums of pure Hodge structures of different weights over the real or complex numbers can be realized as representations of a torus group whose complex points form the Cartesian product of two punctured complex planes. Mixed Hodge structures turn out to consist of a direct sum of pure Hodge structures of different weights together with a nilpotent automorphism. Therefore mixed Hodge structures correspond to the representations of a certain semidirect product of a nilpotent group and the torus group acting on it.
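For concreteness (a sketch; sign conventions vary in the literature), in the real case the torus in question is the Deligne torus \mathbb{S} = \mathrm{Res}_{\mathbb{C}/\mathbb{R}}\,\mathbb{G}_m, whose complex points are

\[
  \mathbb{S}(\mathbb{C}) \;\cong\; \mathbb{C}^{\times} \times \mathbb{C}^{\times},
\]

and a direct sum of pure Hodge structures corresponds to a representation in which (z, w) acts on the summand of type (p, q) by the character z^{-p} w^{-q} (or z^{p} w^{q}, depending on the convention chosen).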
Abstract:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population. Thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes. Therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or just due to chance. Similarity of sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
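As a purely illustrative sketch (not the framework developed in the thesis), the p-value defined above, i.e. the probability that two sequences drawn from the null model attain an equally good or better best local alignment score, can be approximated by naive Monte Carlo sampling; the scoring parameters and the i.i.d. null model below are arbitrary choices made for this example.

    import random

    def smith_waterman(a, b, match=1, mismatch=-1, gap=-2):
        """Best local alignment score (Smith-Waterman) with a linear gap penalty."""
        prev = [0] * (len(b) + 1)
        best = 0
        for ca in a:
            curr = [0]
            for j, cb in enumerate(b, start=1):
                score = max(
                    0,
                    prev[j - 1] + (match if ca == cb else mismatch),  # (mis)match
                    prev[j] + gap,      # gap in b
                    curr[j - 1] + gap,  # gap in a
                )
                curr.append(score)
                best = max(best, score)
            prev = curr
        return best

    def local_alignment_pvalue(x, y, alphabet="ACGT", trials=1000, seed=0):
        """Monte Carlo estimate of P(null-model pair scores >= observed score)."""
        rng = random.Random(seed)
        observed = smith_waterman(x, y)
        hits = 0
        for _ in range(trials):
            rx = "".join(rng.choice(alphabet) for _ in x)  # i.i.d. null sequence
            ry = "".join(rng.choice(alphabet) for _ in y)
            if smith_waterman(rx, ry) >= observed:
                hits += 1
        return (hits + 1) / (trials + 1)  # add-one smoothing avoids a zero estimate

    print(local_alignment_pvalue("ACGTGCTAGCTA", "GCTAGGTTACGT", trials=200))

Such brute-force sampling is exactly what the analytic upper bound sketched in the thesis is meant to avoid.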
Abstract:
The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion of MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, applications of this modern version of MDL have been quite rare because of computational complexity problems: for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
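For reference, the NML distribution behind the stochastic complexity discussed above can be written (for a discrete sample x^n, a model class M, and the maximum likelihood estimator \hat\theta) as

\[
  P_{\mathrm{NML}}(x^n \mid \mathcal{M})
    \;=\; \frac{P\bigl(x^n \mid \hat\theta(x^n), \mathcal{M}\bigr)}
               {\sum_{y^n} P\bigl(y^n \mid \hat\theta(y^n), \mathcal{M}\bigr)},
  \qquad
  \mathrm{SC}(x^n \mid \mathcal{M}) \;=\; -\log P_{\mathrm{NML}}(x^n \mid \mathcal{M}),
\]

where the normalizing sum runs over all data sets of size n; for discrete data this is the exponential sum whose efficient computation the dissertation addresses.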
Abstract:
The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and through alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them. Thus, the relative abundances of different labelling patterns contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes using the measured constraints on the relative abundances of different labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There exist two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods that can get stuck in local optima are avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes as possible from the 13C labelling measurements, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by a flow analysis of metabolite fragments in the network. New linear constraints on the fluxes are then derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools for processing raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
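A minimal sketch of the steady-state balance constraints described above, written with the stoichiometric matrix S (one row per internal metabolite, one column per reaction, entries s_{ij} positive for production and negative for consumption) and the flux vector v:

\[
  \sum_{j:\, s_{ij} > 0} s_{ij}\, v_j \;=\; \sum_{j:\, s_{ij} < 0} |s_{ij}|\, v_j
  \quad \text{for every internal metabolite } i,
  \qquad \text{i.e.} \qquad
  S\,v = 0 .
\]

The 13C labelling measurements then contribute additional linear constraints on v beyond this homogeneous system.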
Abstract:
The occurrence and nature of civilian firearm and explosion injuries in Finland, and the nature of severe gunshot injuries of the extremities, were described in seven original articles. The main data sources used were the National Hospital Discharge Register, the Cause-of-Death Register, and the Archive of Death Certificates at Statistics Finland. The present study was population-based. Epidemiologic methods were used in six papers and clinical analyses in five. In these clinical studies, every original hospital record and death certificate was critically analyzed. The rate of hospitalized firearm injuries declined slightly in Finland from the late 1980s to the early 2000s: the occurrence decreased from 5.1 per 100 000 person-years in 1990 to 2.6 in 2003. The decline was found in unintentional firearm injuries. A high incidence of unintentional injuries by firearms was characteristic of the country, while violence and homicides by firearms represented a minor problem. The incidence of fatal non-suicidal firearm injuries was stable, at 1.8 cases per 100 000 person-years. Suicides using firearms were eight times more common during the period studied. This is contrary to corresponding reports from many other countries. However, the use of alcohol and illegal drugs or substances was detected in as many as one-third of the injuries studied. The median length of hospitalization was three days, and it was significantly associated (p<0.001) with the type of injury. The mean length of hospital stay decreased from the 1980s to the early 2000s. In this study, there was a special interest in gunshot injuries of the extremities. From a clinical point of view, the nature of severe gunshot wounds of the extremities, as well as the primary operative approach in their management, varied. Patients with severe injuries of this kind were managed at university and central hospital emergency departments, by general surgeons in smaller hospitals and by cardiothoracic or vascular surgeons in larger hospitals. Such injuries were rarities and, as such, challenges for surgeons on call. Some noteworthy aspects of the management were noticed, and these should be focused on in the future. On the other hand, the small population density and the relatively large geographic area of Finland do not favor high-volume, centralized trauma management systems. However, experimental war surgery has been increasingly taught in the country since the 1990s, and excellent results could be expected during the present decade. Epidemiologically, explosion injuries can be considered a minor problem in Finland at present, but their significance should not be underestimated. Fatal explosion injuries occurred sporadically; an increase occurred from 2002 to 2004 for no obvious reason. However, in view of the historical facts, another rare major explosion involving several people might become likely within the next decade. The national control system of firearms is mainly based on the new legislation of 1998 and 2002. However, as shown in this study, there is no reason to assume that national hospitalization policies, the political climate, or the legislation changed over the study period in a way that influenced the declining trend, at least not directly. Indeed, the reason why the decline appeared only in the incidence of unintentional injuries remains unclear. It may derive from many practical steps, e.g. locked firearm cases, or from the stability of the community itself. For effective reduction of firearm-related injuries, preventive measures, such as education and counseling, should be targeted at recreational firearm users. To sum up, this study showed that the often reported increasing trend in firearm- and explosion-related injuries has not manifested in Finland. Consequently, it can be concluded that, overall, Finnish legislation together with the various strategies has succeeded in preventing firearm- and explosion-related injuries in the country.
Abstract:
Intensive care is to be provided to patients benefiting from it, in an ethical, efficient, effective and cost-effective manner. This implies a long-term qualitative and quantitative analysis of intensive care procedures and related resources. The study population consists of 2709 patients treated in the general intensive care unit (ICU) of Helsinki University Hospital. The study investigates intensive care patients' mortality, quality of life (QOL) and Quality-Adjusted Life-Years (QALY units), together with factors related to severity of illness, length of stay (LOS), patient age, the evaluation period, and experiences and memories connected with the ICU episode. In addition, the study examines the properties of two QOL measures, the RAND 36-Item Health Survey 1.0 (RAND-36) and the five-item EuroQol-5D (EQ-5D), and assesses the correlation of their results. Patients treated in 1995 responded to the RAND-36 questionnaire in 1996. All patients treated in 1995-2000 received a QOL questionnaire in 2001, when 1-7 years had elapsed since the intensive care. The response rate was 79.5%. Main results: 1) Of the patients who died within the first year (n = 1047), 66% died during the intensive care period or within the following month. The non-survivors were older than the surviving patients, generally had higher-than-average APACHE II and SOFA scores (depicting the severity of illness), and their ICU LOS was longer and hospital stay shorter than those of the surviving patients (p < 0.001). Mortality of patients receiving conservative treatment was higher than that of those receiving surgical treatment. Patients replying to the QOL survey in 2001 (n = 1099) had recovered well: 97% of them lived at home. More than half considered their QOL good or extremely good, 40% satisfactory and 7% bad. All QOL indices of those of working age were considerably lower (p < 0.001) than the comparable figures for the age- and gender-adjusted Finnish population. The 5-year monitoring period made it evident that mental recovery was slower than physical recovery. 2) The results of RAND-36 and EQ-5D correlated well (p < 0.01). The RAND-36 profile measure distinguished more clearly between the different categories of QOL and their levels. EQ-5D measured the patient groups' general QOL well, and its sum index was used to calculate QALY units. 3) QALY units were calculated by multiplying the time the patient survived after the ICU stay, or the expected life-years, by the EQ-5D sum index. Aging automatically lowers the number of QALY units. Patients under the age of 65 receiving conservative treatment benefited from treatment to a greater extent, measured in QALY units, than their peers receiving surgical treatment, but in the age group 65 and over, surgically treated patients received higher QALY ratings than recipients of conservative treatment. 4) The intensive care experience and QOL ratings were connected. The QOL indices were statistically the highest for those who remembered intensive care as a positive experience, although their illness requiring intensive care was less serious than average. No statistically significant differences were found in the QOL indices of those with negative memories, no memories, or those who did not express the quality of their experiences.
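Schematically, the QALY calculation described in point 3) above amounts to

\[
  \text{QALY} \;=\; (\text{life-years survived after the ICU stay, or expected life-years}) \times U_{\mathrm{EQ\text{-}5D}},
\]

where U_{EQ-5D} is the EQ-5D sum index; for example, five years lived at an index of 0.8 would correspond to 4 QALY units.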
Abstract:
Stroke is the second leading cause of death and the leading cause of disability worldwide. Of all strokes, about 80% to 85% are ischemic, and of these, less than 10% occur in young individuals. Stroke in young adults, most often defined as stroke occurring under the age of 45 or 50, can be particularly devastating due to the long expected life-span ahead and the marked socio-economic consequences. Current basic knowledge on ischemic stroke in this age group originates mostly from rather small and imprecise patient series. Regarding emergency treatment, systematic data on the use of intravenous thrombolysis are absent. For this Thesis project, we collected detailed clinical and radiological data on all consecutive patients aged 15 to 49 with first-ever ischemic stroke treated at the Helsinki University Central Hospital between 1994 and 2007. The aims of the study were to define the demographic characteristics, risk factors, imaging features, etiology, and long-term mortality and its predictors in this patient population. We additionally sought to investigate whether intravenous thrombolysis is safe and beneficial for the treatment of acute ischemic stroke in the young. Of our 1008 patients, most were males (ratio 1.7:1), who clearly outnumbered females after the age of 44, whereas females were preponderant among those aged <30. Occurrence increased exponentially with age. The most frequent risk factors were dyslipidemia (60%), smoking (44%), and hypertension (39%). Risk factors accumulated in males and with increasing age. Cardioembolism (20%) and cervicocerebral artery dissection (15%) were the most frequent etiologic subgroups, followed by small-vessel disease (14%) and large-artery atherosclerosis (8%). A total of 33% had undetermined etiology. Left hemisphere strokes were more common in general, and posterior circulation infarcts were more common among those aged <45. Multiple brain infarcts were present in 23% of our patients, 13% had silent infarcts, and 5% had leukoaraiosis. Of those with silent brain infarcts, the majority (54%) had only a single lesion, and most of the silent infarcts were located in the basal ganglia (39%) and subcortical regions (21%). In a logistic regression analysis, type 1 diabetes mellitus in particular predicted the presence of both silent brain infarcts (odds ratio 5.78, 95% confidence interval 2.37-14.10) and leukoaraiosis (9.75; 3.39-28.04). We identified 48 young patients with hemispheric ischemic stroke treated with intravenous tissue plasminogen activator, alteplase. For comparison, we identified 96 untreated control patients matched by age, gender, and admission stroke severity, as well as 96 alteplase-treated older controls aged 50 to 79 matched by gender and stroke severity. Alteplase-treated young patients more often recovered completely (27% versus 10%, P=0.010) or had only mild residual symptoms (40% versus 22%, P=0.025) compared with age-matched controls. None of the alteplase-treated young patients had symptomatic intracerebral hemorrhage or died within the 3-month follow-up. Overall long-term mortality was low in our patient population. Cumulative mortality risks were 2.7% (95% confidence interval 1.5-3.9%) at 1 month, 4.7% (3.1-6.3%) at 1 year, and 10.7% (9.9-11.5%) at 5 years. Among the 30-day survivors who died during the 5-year follow-up, more than half died of vascular causes. Malignancy, heart failure, heavy drinking, preceding infection, type 1 diabetes, increasing age, and large-artery atherosclerosis causing the index stroke independently predicted 5-year mortality when adjusted for age, gender, relevant risk factors, stroke severity, and etiologic subtype. In sum, young adults with ischemic stroke have distinct demographic patterns and frequently harbor traditional vascular risk factors. Etiology in the young is extremely diverse, but in as many as one-third of cases the exact cause remains unknown. Silent brain infarcts and leukoaraiosis are not uncommon brain imaging findings in these patients and should not be overlooked, given their potential prognostic relevance. Outcomes in young adults with hemispheric ischemic stroke can safely be improved with intravenous thrombolysis. Furthermore, despite their overall low risk of death after ischemic stroke, several easily recognizable factors, most of them modifiable, predict higher mortality in the long term in young adults.
Abstract:
Stroke, ischemic or hemorrhagic, is among the foremost causes of death and disability worldwide. Massive brain swelling is the leading cause of death in large hemispheric strokes and is only modestly alleviated by available treatment. Thrombolysis with tissue plasminogen activator (TPA) is the only approved therapy in acute ischemic stroke, but fear of TPA-mediated hemorrhage is often a reason for withholding this otherwise beneficial treatment. In addition, recanalization of the occluded artery (spontaneously or with thrombolysis) may cause reperfusion injury by promoting brain edema, hemorrhage, and inflammatory cell infiltration. A dominant event underlying these phenomena seems to be disruption of the blood-brain barrier (BBB). In contrast to ischemic stroke, no widely approved clinical therapy exists for intracerebral hemorrhage (ICH), which is associated with poor outcome mainly due to the mass effect of the enlarging hematoma and the associated brain swelling. Mast cells (MCs) are perivascularly located resident inflammatory cells that contain potent vasoactive, proteolytic, and fibrinolytic substances in their cytoplasmic granules. Experiments from our laboratory showed MC density and their state of granulation to be altered early after focal transient cerebral ischemia, and degranulating MCs were associated with perivascular edema and hemorrhage. (I) Pharmacological MC stabilization led to significantly reduced ischemic brain swelling (40%) and BBB leakage (50%), whereas pharmacological MC degranulation increased these by 90% and 50%, respectively. Pharmacological MC stabilization also revealed a 40% reduction in neutrophil infiltration. Moreover, genetic MC deficiency was associated with an almost 60% reduction in brain swelling, a 50% reduction in BBB leakage, and 50% less neutrophil infiltration, compared with controls. (II) TPA induced MC degranulation in vitro. In vivo experiments with post-ischemic TPA administration demonstrated 70- to 100-fold increases in hemorrhage formation (HF) compared with controls. HF was significantly reduced by pharmacological MC stabilization at 3 (95%), 6 (75%), and 24 hours (95%) of follow-up. Genetic MC deficiency again supported the role of MCs, leading to a 90% reduction in HF at 6 and 24 hours. Pharmacological MC stabilization and genetic MC deficiency were also associated with significant reductions in brain swelling and in neutrophil infiltration. Importantly, these effects translated into a significantly better neurological outcome and lower mortality after 24 hours. (III) Finally, in ICH experiments, pharmacological MC stabilization resulted in significantly less brain swelling, diminished growth in hematoma volume, better neurological scores, and decreased mortality. Pharmacological MC degranulation produced the opposite effects. Genetic MC deficiency revealed a beneficial effect similar to that found with pharmacological MC stabilization. In sum, the role of MCs in these clinically relevant scenarios is supported by a series of experiments performed both in vitro and in vivo. The fact that not only genetic MC deficiency but also drugs targeting MCs could modulate these parameters (translating into better outcome and decreased mortality) suggests a potential therapeutic approach in a number of highly prevalent cerebral insults in which extensive tissue injury is followed by dangerous brain swelling and inflammatory cell infiltration. Furthermore, these experiments hint at a novel way to improve the safety of thrombolytics and at a potential cellular target for those seeking novel forms of treatment for ICH.
Abstract:
Volatile organic compounds (VOCs) affect atmospheric chemistry and thereby also participate in climate change in many ways. The long-lived greenhouse gases and tropospheric ozone are the most important radiative forcing components warming the climate, while aerosols are the most important cooling component. VOCs can have warming effects on the climate: they participate in tropospheric ozone formation and compete with the greenhouse gases for oxidants, thus, for example, lengthening the atmospheric lifetime of methane. Some VOCs, on the other hand, cool the atmosphere by taking part in the formation of aerosol particles. Some VOCs, in addition, have direct health effects, such as the carcinogenic benzene. VOCs are emitted into the atmosphere by various processes. Primary emissions of VOCs include biogenic emissions from vegetation, biomass burning and human activities. VOCs are also produced as secondary emissions from the reactions of other organic compounds. Globally, forests are the largest source of VOCs entering the atmosphere. This thesis focuses on measurements of emissions and concentrations of VOCs in one of the largest vegetation zones in the world, the boreal zone. An automated sampling system was designed and built for continuous VOC concentration and emission measurements with a proton transfer reaction mass spectrometer (PTR-MS). The system measured for one hour at a time in three-hour cycles: 1) ambient volume mixing-ratios of VOCs in the Scots-pine-dominated boreal forest, 2) VOC fluxes above the canopy, and 3) VOC emissions from Scots pine shoots. In addition to the online PTR-MS measurements, we determined the composition and seasonality of the VOC emissions from a Siberian larch with adsorbent samples and GC-MS analysis. The VOC emissions from Siberian larch were reported for the first time in the literature. The VOC emissions were 90% monoterpenes (mainly sabinene), with the rest sesquiterpenes (mainly α-farnesene). The normalized monoterpene emission potentials were highest in late summer, rising again in late autumn. The normalized sesquiterpene emission potentials were also highest in late summer, but decreased towards the autumn. The emissions of mono- and sesquiterpenes from the deciduous Siberian larch, as well as the emissions of monoterpenes measured from the evergreen Scots pine, were well described by the temperature-dependent emission algorithm. In the Scots-pine-dominated forest, canopy-scale emissions of monoterpenes and oxygenated VOCs (OVOCs) were of the same magnitude. Methanol and acetone were the most abundant OVOCs emitted from the forest and also in the ambient air. Annually, methanol and acetone mixing ratios were of the order of 1 ppbv. The monoterpene and the summed isoprene and 2-methyl-3-buten-2-ol (MBO) volume mixing-ratios were an order of magnitude lower. The majority of the monoterpene and methanol emissions from the Scots-pine-dominated forest were explained by emissions from Scots pine shoots. The VOCs were divided into three classes based on the dynamics of their summer-time concentrations: 1) reactive compounds with local biological, anthropogenic or chemical sources (methanol, acetone, butanol and hexanal), 2) compounds whose emissions are only temperature-dependent (monoterpenes), and 3) long-lived compounds (benzene, acetaldehyde). Biogenic VOC (methanol, acetone, isoprene+MBO and monoterpene) volume mixing-ratios had clear diurnal patterns during summer. The ambient mixing ratios of the other VOCs did not show this behaviour. During winter we did not observe systematic diurnal cycles for any of the VOCs. Different sources, removal processes and turbulent mixing explained the dynamics of the measured mixing-ratios qualitatively. However, quantitative understanding will require long-term emission measurements of the OVOCs and the use of comprehensive chemistry models. Keywords: Hydrocarbons, VOC, fluxes, volume mixing-ratio, boreal forest
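The temperature-dependent emission algorithm referred to in the abstract above is commonly written in the Guenther-type exponential form (the standard temperature and coefficient below are typical literature values, not results of this work):

\[
  E(T) \;=\; E_0 \exp\bigl(\beta\,(T - T_0)\bigr),
\]

where E_0 is the normalized emission potential at the standard temperature T_0 (often 303 K) and \beta is an empirical temperature coefficient, commonly of the order of 0.09 K^{-1} for monoterpenes.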
Abstract:
The doctoral dissertation, entitled Siperiaa sanoiksi - uralilaisuutta teoiksi. Kai Donner poliittisena organisaattorina sekä tiedemiehenä antropologian näkökulmasta, clarifies the early history of anthropological fieldwork and research in Siberia. The object of research is Kai Donner (1888-1935), fieldworker, explorer and researcher of Finno-Ugric languages, who made two expeditions to Siberia, during 1911-1913 and in 1914. Donner studied in Cambridge in 1909 under the guidance of James Frazer, A. C. Haddon and W. H. R. Rivers, and alongside Bronislaw Malinowski. After finishing his expeditions, Donner organized the enlistment of Finnish university students for military training in Germany. He was exiled and participated in the struggle for Finnish independence. After that, he organized military offensives in Russia and participated in domestic politics and policy in cooperation with C. G. E. Mannerheim. He also wrote four ethnographic descriptions of Siberia and worked with Scandinavian researchers of the Arctic areas and polar explorers. The results of this analysis can be summed up as follows. In the history of ethnographic research in Finland, it is possible to identify two fieldwork traditions: the first started from M. A. Castrén's explorations and research, and the second from August Ahlqvist's. Donner can be included in the first group with Castrén and Sakari Pälsi, unlike other contemporary philologists and cultural researcher colleagues, who used the method of August Ahlqvist. Donner's holistic, lively and participant-observation-based way of working was articulated in his writings two years before Malinowski published his thesis about modern fieldwork. Unfortunately, Donner did not get the chance to continue his research because of the civil war in Finland and the dogmatic position of E. N. Setälä. Donner's main work, the ethnohistorical Siberia, contains his political and anthropological visions of a common and threatened Uralic nation under Russian pressure. The important contributions of his expeditions can be found in the areas of cultural ecology, nutritional anthropology and fieldwork methods. It is also possible to show that his short stories from Siberia contain psychological factors that correlate with his early life history.
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. But when estimating class frequencies, the study variable is binary or polytomous. Therefore logistic-type assisting models (e.g. logistic or probit models) should be preferred over the linear one. However, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG especially if the domain sample size is small or if the assisting model is strong. Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample-fit model and the census-fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
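As a hedged sketch of the design-based model-assisted form behind the estimators discussed above (notation not taken from the thesis): with population U, sample s, inclusion probabilities \pi_k, and fitted values \hat{y}_k from the assisting model (linear for GREG-lin, logistic-type for the L-GREG estimators), a GREG estimator of the total of y can be written

\[
  \hat{t}_{y,\mathrm{GREG}} \;=\; \sum_{k \in U} \hat{y}_k \;+\; \sum_{k \in s} \frac{y_k - \hat{y}_k}{\pi_k},
\]

and domain class frequencies and proportions are obtained by taking y_k to be a class-membership indicator and restricting the estimator to the domain of interest.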
Abstract:
The goals of this study were to analyze the forms of emotional tendencies that are likely to motivate moral behaviors, and to find correlates for these tendencies. In study 1, students' narratives of their own guilt or shame experiences were analyzed. The results showed that pure shame was more likely to motivate avoidance than reparation, whereas guilt and a combination of guilt and shame were likely to motivate reparation. However, all types of emotion could lead to chronic rumination if the person was not clearly responsible for the situation. In study 2, the relations of empathy with two measures of guilt were examined in a sample of 13- to 16-year-olds (N=113). Empathy was measured using Davis's IRI, and guilt using Tangney's TOSCA and Hoffman's semi-projective story completion method, which includes two different scenarios, guilt over cheating and guilt over inaction. Empathy correlated more strongly with both measures of guilt than the two measures correlated with each other. Hoffman's guilt over inaction was more strongly associated with the empathy measures in girls than in boys, whereas for guilt over cheating the pattern was the opposite. Girls and boys who describe themselves as empathetic may emphasize different aspects of morality and feel guilty in different contexts. In study 3, cultural and gender differences in guilt and shame (TOSCA) and value priorities (the Schwartz Value Survey) were studied in samples of Finnish (N=156) and Peruvian (N=159) adolescents. Gender differences were found to be larger and more stereotypical among the Finns than among the Peruvians. Finnish girls were more prone to guilt and shame than boys were, whereas among the Peruvians there was no gender difference in guilt, and boys were more shame-prone than girls. The results support the view that psychological gender differences are largest in individualistic societies. In study 4, the relations of value priorities to guilt, shame and empathy were examined in two samples, one of 15- to 19-year-old high school students (N = 207) and the other of military conscripts (N = 503). Guilt was, in both samples, positively related to valuing universalism, benevolence, tradition, and conformity, and negatively related to valuing power, hedonism, stimulation, and self-direction. The results for empathy were similar, but the relation to the openness to change versus conservation value dimension was weaker. Shame and personal distress were weakly related to values. In sum, shame without guilt and the TOSCA shame scale are tendencies that are unlikely to motivate moral behavior in the Finnish cultural context. Guilt is likely to be connected to positive social behaviors, but excessive guilt can cause psychological problems. Moral emotional tendencies are related to culture, to cultural conceptions of gender and to individual value priorities.
Abstract:
We report a set of measurements of particle production in inelastic proton-antiproton collisions collected with a minimum-bias trigger at the Tevatron Collider with the CDF II experiment. The inclusive charged particle transverse momentum differential cross section is measured, with improved precision, over a range about ten times wider than in previous measurements. Previous modeling of the spectrum appears to be incompatible with the high particle momenta observed. The dependence of the charged particle transverse momentum on the event particle multiplicity is analyzed to study the various components of hadron interactions; this is one of the observables most poorly reproduced by the available Monte Carlo generators. A first measurement of the event transverse energy sum differential cross section is also reported, and a comparison with a Pythia prediction at the hadron level is performed. The inclusive charged particle differential production cross section is fairly well reproduced only in the transverse momentum range available from previous measurements; at higher momentum the agreement is poor. The transverse energy sum is poorly reproduced over the whole spectrum. The dependence of the charged particle transverse momentum on the particle multiplicity requires the introduction of more sophisticated particle production mechanisms, such as multiple parton interactions, in order to be better explained.