44 results for torso segment masses
in Helda - Digital Repository of the University of Helsinki
Abstract:
The topic of this study is the most renowned anthology of essays written in Literary Chinese, Guwen guanzhi, compiled and edited by Wu Chengquan (Chucai) and Wu Dazhi (Diaohou) and first published during the Qing dynasty, in 1695. Because of the compilers' low social standing, their anthology remained outside the recommended study materials produced by members of the established literati and used to prepare students for the imperial civil-service examinations. Since the end of the imperial era, however, Guwen guanzhi has risen to the position of the classical anthology par excellence. Today it is widely used as required or supplementary reading material for Literary Chinese in middle schools both in Mainland China and in Taiwan. The goal of this study is to explain the persistent longevity of the anthology. So far, Guwen guanzhi has not been the topic of any published academic study, and the opinions expressed on it in various sources diverge widely. Through a comparative study of a dozen classical Chinese anthologies in use during the early Qing dynasty, this study reveals the extent to which the compilers of Guwen guanzhi modelled their work on other selections. Altogether 86% of the texts in Guwen guanzhi originate from another Qing-era anthology, Guwen xiyi, often copied character by character. However, the notes and commentaries are all different. Concentrating on the special characteristics unique to Guwen guanzhi, namely the commentaries and certain peculiarities in the selection of texts, this study then discusses the possible reasons for the popularity of Guwen guanzhi over the competing readers during the Qing era. Most remarkably, Guwen guanzhi put into practice the egalitarian educational ideals of the Ming philosopher Wang Shouren (Yangming). Thus Guwen guanzhi suited the self-enlightenment needs of the "subordinate classes", in particular the rising middle class comprised mainly of merchants. The lack of moral teleology, together with the compact size, the relative comprehensiveness of the selection and the good notes and comments, has made Guwen guanzhi well suited to the new society that followed the abolition of the imperial examination system. Through a content analysis based on a sample of the texts, this study measures the relative emphasis on centralism and localism (in both concrete and spiritual terms) expressed in the texts of Guwen guanzhi. The analysis shows that the texts manifest some bias towards emphasising innate virtue at the expense of state-defined morality. This may reflect hidden criticism of the intellectual oppression exercised by centralised imperial rule. During the early decades of the Qing era, such criticism was often linked to Ming loyalism. Finally, this study concludes that the kind of "spiritual localism" that Guwen guanzhi manifests gives it the potential to undermine monolithic orthodoxy even in today's Chinese societies. This study has progressed hand in hand with the translation of a selection of texts from Guwen guanzhi into Finnish, published by Gaudeamus Helsinki University Press: Jadekasvot – Valittuja tarinoita Kiinan muinaisajoilta (2005), Jadelähde – Valittuja kirjoituksia Kiinan keskiajalta (2007) and Jadepeili – Valittuja kirjoituksia keisarillisen Kiinan kulta-ajoilta (2008). All translations are critical editions, complete with extensive annotation. The trilogy is the first comprehensive translation based on Guwen guanzhi in a European language.
Abstract:
The purpose of this study was to deepen the understanding of market segmentation theory by studying the evolution of the concept and by identifying the antecedents and consequences of the theory. The research method was influenced by content analysis and meta-analysis. The evolution of market segmentation theory was studied as a reflection of the evolution of marketing theory. According to this study, the theory of market segmentation has its roots in microeconomics and has been influenced by different disciplines, such as motivation research and buyer behaviour theory. Furthermore, this study suggests that the evolution of market segmentation theory can be divided into four major eras: the era of foundations, development and blossoming, stillness and stagnation, and the era of re-emergence. Market segmentation theory emerged in the mid-1950s and flourished during the period between the mid-1950s and the late 1970s. During the 1980s the scientific community lost interest in the theory and no significant contributions were made. Now, towards the dawn of the new millennium, new approaches have emerged and market segmentation has gained new attention.
Abstract:
This study is a pragmatic description of the evolution of the genre of English witchcraft pamphlets from the mid-sixteenth century to the end of the seventeenth century. Witchcraft pamphlets were produced for a new kind of readership, the semi-literate, uneducated masses, and the central hypothesis of this study is that publishing for the masses entailed rethinking the ways of writing and printing texts. Analysis of the use of typographical variation and illustrations indicates how printers and publishers catered to the tastes and expectations of this new audience. Analysis of the language of witchcraft pamphlets shows how pamphlet writers took the new readership into account by transforming formal written source materials (trial proceedings) into more immediate ways of writing. The material for this study comes from the Corpus of Early Modern English Witchcraft Pamphlets, which has been compiled by the author. The multidisciplinary analysis incorporates both visual and linguistic aspects of the texts, with methodologies and theoretical insights adopted eclectically from historical pragmatics, genre studies, book history, corpus linguistics, systemic functional linguistics and cognitive psychology. The findings are anchored in the socio-historical context of early modern publishing, reading, literacy and witchcraft beliefs. The study shows not only how consideration of a new audience by both authors and printers influenced the development of a genre, but also the value of combining visual and linguistic features in pragmatic analyses of texts.
Abstract:
The common focus of the studies brought together in this work is the prosodic segmentation of spontaneous speech. The theoretically most central aspect is the introduction and further development of the IJ-model of intonational chunking. The study consists of a general introduction and five detailed studies that approach prosodic chunking from different perspectives. The data consist of recordings of face-to-face interaction in several spoken varieties of Finnish and Finland Swedish; the methodology is usage-based and qualitative. The term “speech prosody” refers primarily to the melodic and rhythmic characteristics of speech. Both speaking and understanding speech require the ability to segment the flow of speech into suitably sized prosodic chunks. In order to be usage-based, a study of spontaneous speech consequently needs to be based on material that is segmented into prosodic chunks of various sizes. The segmentation is seen to form a hierarchy of chunking. The prosodic models that have so far been developed and employed in Finland have been based on sentences read aloud, which has made it difficult to apply these models in the analysis of spontaneous speech. The prosodic segmentation of spontaneous speech has not previously been studied in detail in Finland. This research focuses mainly on the following three questions: (1) What are the factors that need to be considered when developing a model of prosodic segmentation of speech, so that the model can be employed regardless of the language or dialect under analysis? (2) What are the characteristics of a prosodic chunk, and what are the similarities in the ways chunks of different languages and varieties manifest themselves that will make it possible to analyze different data according to the same criteria? (3) How does the IJ-model of intonational chunking introduced as a solution to question (1) function in practice in the study of different varieties of Finnish and Finland Swedish? The boundaries of the prosodic chunks were manually marked in the material according to context-specific acoustic and auditory criteria. On the basis of the data analyzed, the IJ-model was further elaborated and implemented, thus allowing comparisons between different language varieties. On the basis of the empirical comparisons, a prosodic typology is presented for the dialects of Swedish in Finland. The general contention is that the principles of the IJ-model can readily be used as a methodological tool for prosodic analysis irrespective of language varieties.
Abstract:
The mitochondrion is an organelle of utmost importance, and the mitochondrial network performs an array of functions that go well beyond ATP synthesis. Defects in mitochondrial performance lead to diseases, often affecting the nervous system and muscle. Although many of these mitochondrial diseases have been linked to defects in specific genes, the molecular mechanisms underlying the pathologies remain unclear. The work in this thesis aims to determine how defects in mitochondria are communicated within, and interpreted by, the cells, and how this contributes to disease phenotypes. Fumarate hydratase (FH) is an enzyme of the citrate cycle. Recessive defects in FH lead to infantile mitochondrial encephalopathies, while dominant mutations predispose to tumor formation. Defects in succinate dehydrogenase (SDH), the enzyme that precedes FH in the citrate cycle, have also been described. Mutations in the SDH subunits SDHB, SDHC and SDHD are associated with tumor predisposition, while mutations in SDHA lead to a characteristic mitochondrial encephalopathy of childhood. Thus, the citrate cycle, via FH and SDH, seems to have essential roles in mitochondrial function, as well as in the regulation of processes such as cell proliferation, differentiation or death. Tumor predisposition is not a typical feature of mitochondrial energy deficiency diseases. However, defects in citrate cycle enzymes also affect mitochondrial energy metabolism. It is therefore necessary to distinguish what is specific to defects in the citrate cycle, and thus possibly associated with the tumor phenotype, from the generic consequences of defects in mitochondrial aerobic metabolism. We used primary fibroblasts from patients with recessive FH defects to study the cellular consequences of FH deficiency (FH-). Like the tumors observed in FH- patients, these fibroblasts have very low FH activity. The use of primary cells has the advantage that they are diploid, in contrast with the aneuploid tumor cells, thereby enabling the study of the early consequences of FH- in a diploid background, before tumorigenesis and aneuploidy. To distinguish the specific consequences of FH- from the typical consequences of defects in mitochondrial aerobic metabolism, we used primary fibroblasts from patients with MELAS (mitochondrial encephalopathy with lactic acidosis and stroke-like episodes) and from patients with NARP (neuropathy, ataxia and retinitis pigmentosa). These diseases also affect mitochondrial aerobic metabolism but are not known to predispose to tumor formation. To study the systemic consequences of defects in mitochondrial aerobic metabolism in vivo, we used a transgenic mouse model of late-onset mitochondrial myopathy. The mouse carries a transgene with an in-frame duplication of a segment of Twinkle, the mitochondrial replicative helicase, whose defects underlie the human disease progressive external ophthalmoplegia. This mouse model replicates the patient phenotype, particularly neuronal degeneration, mitochondrial myopathy, and a subtle decrease in respiratory chain activity associated with mtDNA deletions. Because of the accumulation of mtDNA deletions, the mouse was named deletor. We first studied the consequences of FH- and of respiratory chain defects for energy metabolism in primary fibroblasts. To further characterize the effects of FH- and respiratory chain malfunction in primary fibroblasts at the transcriptional level, we used expression microarrays.
To understand the consequences of respiratory chain defects in vivo, we also studied the transcriptional consequences of Twinkle defects in deletor mouse skeletal muscle, cerebellum and hippocampus. Fumarate accumulated in the FH- homozygous cells, but not in the compound heterozygous lines. However, virtually all FH- lines lacked cytoplasmic FH. Induction of glycolysis was common to FH-, MELAS and NARP fibroblasts. In deletor muscle, glycolysis seemed to be upregulated. This was in contrast with deletor cerebellum and hippocampus, where mitochondrial biogenesis was in progress. Despite sharing a glycolytic pattern in energy metabolism, FH- and respiratory chain defects led to opposite consequences in the redox environment. FH- was associated with a reduced redox environment, while MELAS and NARP displayed evidence of oxidative stress. The deletor cerebellum showed transcriptional induction of antioxidant defenses, suggesting increased production of reactive oxygen species. Since fibroblasts do not represent the tissues where the tumors appear in FH- patients, we compared the fibroblast array data with data from FH- leiomyomas and normal myometrium. This allowed the determination of the pathways and networks affected by FH deficiency in primary cells that are also relevant for myoma formation. A key pathway regulating smooth muscle differentiation, SRF (serum response factor)-FOS-JUNB, was found to be downregulated in FH- cells and in myomas. While in the deletor mouse many pathways were affected on a tissue-specific basis, such as FGF21 induction in deletor muscle, others were systemic, such as the downregulation of ALAS2-linked heme synthesis in all deletor tissues analyzed. Interestingly, however, even a tissue-specific response such as FGF21 secretion could elicit a global starvation response. The work presented in this thesis has contributed to a better understanding of mitochondrial stress signalling and of the pathways interpreting it and transducing it into human pathology.
Abstract:
Drug Analysis without Primary Reference Standards: Application of LC-TOFMS and LC-CLND to Biofluids and Seized Material
Primary reference standards for new drugs, metabolites, designer drugs or rare substances may not be obtainable within a reasonable period of time, or their availability may be hindered by extensive administrative requirements. Standards are usually costly and may have a limited shelf life. Finally, many compounds are not available commercially, and sometimes not at all. A new approach within forensic and clinical drug analysis involves substance identification based on accurate mass measurement by liquid chromatography coupled with time-of-flight mass spectrometry (LC-TOFMS) and quantification by LC coupled with chemiluminescence nitrogen detection (LC-CLND), which possesses an equimolar response to nitrogen. Formula-based identification relies on the fact that the accurate mass of an ion from a chemical compound corresponds to the elemental composition of that compound. Single-calibrant nitrogen-based quantification is feasible with a nitrogen-specific detector since approximately 90% of drugs contain nitrogen. A method was developed for toxicological drug screening in 1 ml urine samples by LC-TOFMS. A large target database of exact monoisotopic masses was constructed, representing the elemental formulae of reference drugs and their metabolites. Identification was based on matching the sample component's measured parameters with those in the database, including accurate mass and retention time, if available. In addition, an algorithm for isotopic pattern match (SigmaFit) was applied. Differences in ion abundance in urine extracts did not affect the mass accuracy or the SigmaFit values. For routine screening practice, a mass tolerance of 10 ppm and a SigmaFit tolerance of 0.03 were established. Seized street drug samples were analysed instantly by LC-TOFMS and LC-CLND, using a dilute-and-shoot approach. In the quantitative analysis of amphetamine, heroin and cocaine findings, the mean relative difference between the results of LC-CLND and the reference methods was only 11%. In blood specimens, liquid-liquid extraction recoveries for basic lipophilic drugs were first established, and the validity of the generic extraction recovery-corrected single-calibrant LC-CLND was then verified with proficiency test samples. The mean accuracy was 24% and 17% for plasma and whole blood samples, respectively, all results falling within the confidence range of the reference concentrations. Further, metabolic ratios for the opioid drug tramadol were determined in a pharmacogenetic study setting. Extraction recovery estimation, based on model compounds with similar physicochemical characteristics, produced clinically feasible results without reference standards.
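As a rough illustration of the formula-based identification step described above, the sketch below matches a measured accurate mass against a target database within a ppm tolerance. This is not the thesis's software: only the 10 ppm tolerance is taken from the text, while the compound names, masses and function names are illustrative assumptions.

```python
# Minimal sketch of accurate-mass matching against a monoisotopic-mass
# database using a ppm tolerance (compound data below are illustrative only).

from dataclasses import dataclass


@dataclass
class TargetEntry:
    name: str
    monoisotopic_mass: float  # neutral monoisotopic mass in Da


def ppm_error(measured: float, theoretical: float) -> float:
    """Relative mass error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6


def match_accurate_mass(measured: float, database: list[TargetEntry],
                        tolerance_ppm: float = 10.0):
    """Return database entries within the ppm tolerance, sorted by |error|."""
    hits = [(entry, ppm_error(measured, entry.monoisotopic_mass))
            for entry in database
            if abs(ppm_error(measured, entry.monoisotopic_mass)) <= tolerance_ppm]
    return sorted(hits, key=lambda hit: abs(hit[1]))


if __name__ == "__main__":
    targets = [
        TargetEntry("amphetamine", 135.10480),  # illustrative values
        TargetEntry("tramadol", 263.18853),
    ]
    # e.g. a neutral mass reconstructed from a measured [M+H]+ ion
    for entry, err in match_accurate_mass(135.1055, targets):
        print(f"{entry.name}: {err:+.1f} ppm")
```

In practice, retention time and the isotopic pattern score (such as the SigmaFit tolerance of 0.03 mentioned above) would further filter these candidate hits.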
Abstract:
Glaucoma is the second leading cause of blindness worldwide. It is a group of optic neuropathies characterized by progressive optic nerve degeneration, excavation of the optic disc due to apoptosis of retinal ganglion cells, and corresponding visual field defects. Open angle glaucoma (OAG) is a subtype of glaucoma, classified according to the age of onset into juvenile and adult-onset forms with a cut-off point of 40 years of age. The prevalence of OAG is 1-2% in the population over 40 years and increases with age. During the last decade several candidate loci and three candidate genes for OAG, myocilin (MYOC), optineurin (OPTN) and WD40-repeat 36 (WDR36), have been identified. Exfoliation syndrome (XFS), age, elevated intraocular pressure and genetic predisposition are known risk factors for OAG. XFS is characterized by accumulation of grayish scales of fibrillogranular extracellular material in the anterior segment of the eye. XFS is overall the most common identifiable cause of glaucoma (exfoliation glaucoma, XFG). In the past year, three single nucleotide polymorphisms (SNPs) in the lysyl oxidase-like 1 (LOXL1) gene have been associated with XFS and XFG in several populations. This thesis describes the first molecular genetic studies of OAG and XFS/XFG in the Finnish population. The role of the MYOC and OPTN genes and fourteen candidate loci was investigated in eight Finnish glaucoma families. Both candidate genes and the loci were excluded in the families, further confirming the heterogeneous nature of OAG. To investigate the genetic basis of glaucoma in a large Finnish family with juvenile and adult onset OAG, we analysed the MYOC gene in family members. A glaucoma-associated mutation (Thr377Met) was identified in the MYOC gene, segregating with the disease in the family. This finding has great significance for the family and encourages investigation of the MYOC gene in other Finnish OAG families as well. In order to identify genetic susceptibility loci for XFS, we carried out a genome-wide scan in an extended Finnish XFS family. This scan produced a promising candidate locus on chromosomal region 18q12.1-21.33 and several additional putative susceptibility loci for XFS. The locus on chromosome 18 provides a solid starting point for the fine-scale mapping studies that are needed to identify the variants conferring susceptibility to XFS in the region. A case-control and family-based association study and a family-based linkage study were performed to evaluate whether SNPs in the LOXL1 gene confer risk for XFS, XFG or POAG in Finnish patients. A significant association between LOXL1 gene SNPs and XFS and XFG was confirmed in the Finnish population. However, no association was detected with POAG. Other genetic and environmental factors are probably also involved in the pathogenesis of XFS and XFG.
Abstract:
The structures of the (1→3),(1→4)-β-D-glucans of oat bran, whole-grain oats and barley, and processed foods were analysed. Various methods of hydrolysis of β-glucan, the content of insoluble fibre in whole grains of oats and barley, and the solution behaviour of oat and barley β-glucans were studied. The isolated soluble β-glucans of oat bran and whole-grain oats and barley were hydrolysed with lichenase, an enzyme specific for (1→3),(1→4)-β-D-glucans. The amounts of oligosaccharides produced from bran were analysed with capillary electrophoresis and those from whole grains with high-performance anion-exchange chromatography with pulsed amperometric detection. The main products were 3-O-β-cellobiosyl-D-glucose and 3-O-β-cellotriosyl-D-glucose, oligosaccharides with degrees of polymerisation of 3 and 4 (DP3 and DP4). Small differences were detected between soluble and insoluble β-glucans and also between the β-glucans of oats and barley. These differences can only be seen in the DP3:DP4 ratio, which was higher for barley than for oat and also higher for insoluble than for soluble β-glucan. A greater proportion of barley β-glucan remained insoluble than of oat β-glucan. The molar masses of the soluble β-glucans of oats and barley were the same, as were those of the insoluble β-glucans of oats and barley. To analyse the effects of cooking, baking, fermentation and drying, β-glucan was isolated from porridge, bread and fermentate and also from their starting materials. More β-glucan was released after cooking and less after baking. Drying decreased the extractability for bread and fermentate but increased it for porridge. Different hydrolysis methods for β-glucan were compared. Acid hydrolysis and the modified AOAC method gave similar results, while hydrolysis with lichenase gave higher recoveries than the other two. The combination of lichenase hydrolysis and high-performance anion-exchange chromatography with pulsed amperometric detection was found to be best for the analysis of β-glucan content. The content of insoluble fibre was higher for barley than for oats, and the amount of β-glucan in the insoluble fibre fraction was higher for oats than for barley. The flow properties of both water and aqueous cuoxam solutions of oat and barley β-glucans were studied. Shear thinning was stronger for the water solutions of oat β-glucan than for barley β-glucan. In aqueous cuoxam, shear thinning was not observed at the same concentration as in water but only in high-concentration solutions, in which the viscosity of barley β-glucan was slightly higher than that of oat β-glucan. The oscillatory measurements showed that the crossover point of the G′ and G″ curves was much lower for barley β-glucan than for oat β-glucan, indicating a higher tendency towards solid-like behaviour for barley β-glucan than for oat β-glucan.
Abstract:
In technicolor theories the scalar sector of the Standard Model is replaced by a strongly interacting sector. Although the Standard Model has been exceptionally successful, the scalar sector causes theoretical problems that make these theories seem an attractive alternative. I begin my thesis by considering QCD, the known example of strong interactions. The theory exhibits two phenomena: confinement and chiral symmetry breaking. I find the low-energy dynamics to be similar to that of the sigma models. I then analyze the problems of the Standard Model Higgs sector, mainly unnaturalness and triviality. Motivated by the example of QCD, I introduce the minimal technicolor model to resolve these problems. I demonstrate that the minimal model is free of anomalies and then deduce the main elements of its low-energy particle spectrum. I find that the particle spectrum contains massless or very light technipions, as well as technibaryons and techni-vector mesons with high masses of over 1 TeV. Standard Model fermions remain strictly massless at this stage. Thus I introduce the companion theory of flavor for technicolor, called extended technicolor. I show that the Standard Model fermions and the technihadrons receive masses, but that they remain too light. I also discuss flavor-changing neutral currents and precision electroweak measurements. I then show that walking technicolor models partly solve these problems. In these models, contrary to QCD, the coupling evolves slowly over a large range of energy scales. This behavior increases the masses, so that even the light technihadrons are too heavy to be detected at current particle accelerators. All observed masses of the Standard Model particles can also be generated, except for those of the bottom and top quarks. It is thus shown in this thesis that, excluding the masses of the third-generation quarks, theories based on walking technicolor can in principle produce the observed particle spectrum.
Abstract:
One of the unanswered questions of modern cosmology is the issue of baryogenesis. Why does the universe contain a huge amount of baryons but no antibaryons? What kind of mechanism can produce this kind of asymmetry? One theory proposed to explain this problem is leptogenesis. In this theory, right-handed neutrinos with heavy Majorana masses are added to the Standard Model. This addition introduces explicit lepton number violation into the theory. Instead of producing the baryon asymmetry directly, these heavy neutrinos decay in the early universe. If these decays are CP-violating, they produce lepton number. This lepton number is then partially converted into baryon number by the electroweak sphaleron process. In this work we start by reviewing the current observational data on the amount of baryons in the universe. We also introduce Sakharov's conditions, the necessary criteria for any theory of baryogenesis. We review the current data on neutrino oscillation and explain why this requires the existence of neutrino mass. We introduce the different kinds of mass terms that can be added for neutrinos, and explain how the see-saw mechanism naturally explains the observed neutrino mass scales, motivating the addition of the Majorana mass term. After introducing leptogenesis qualitatively, we derive the Boltzmann equations governing leptogenesis and give analytical approximations to them. Finally we review the numerical solutions of these equations, demonstrating the capability of leptogenesis to explain the observed baryon asymmetry. In the appendix, simple Feynman rules are given for theories with interactions involving both Dirac and Majorana fermions, and these are applied at tree level to calculate the parameters relevant to the theory.
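For orientation, the Boltzmann equations referred to above are often written in the simplified form sketched below; this follows common conventions in the leptogenesis literature and is an assumption-laden outline, not the thesis's own derivation or notation.

```latex
% Schematic leptogenesis Boltzmann equations (common simplified form);
% N_X are number densities in a comoving volume and z = M_1/T.
\begin{align}
  \frac{\mathrm{d}N_{N_1}}{\mathrm{d}z} &= -(D + S)\left(N_{N_1} - N_{N_1}^{\mathrm{eq}}\right), \\
  \frac{\mathrm{d}N_{B-L}}{\mathrm{d}z} &= -\varepsilon_1\, D \left(N_{N_1} - N_{N_1}^{\mathrm{eq}}\right) - W\, N_{B-L}
\end{align}
% Here D and S account for decays/inverse decays and scatterings,
% \varepsilon_1 is the CP asymmetry in N_1 decays, and W is the washout term.
% The baryon asymmetry then follows from the final N_{B-L} through sphaleron
% conversion, schematically \eta_B \propto c_{\mathrm{sph}}\, N_{B-L}^{\mathrm{final}}.
```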
Abstract:
An asymmetrical flow field-flow fractionation (AsFlFFF) device was constructed, and its applicability to industrial, biochemical and pharmaceutical applications was studied. The effect of several parameters, such as pH, ionic strength, temperature and the mixing ratios of the reactants, on the particle sizes, molar masses and the formation of aggregates of macromolecules was determined by AsFlFFF. In the industrial application, AsFlFFF proved to be a valuable tool in the characterization of the hydrodynamic particle sizes, molar masses and phase transition behavior of various poly(N-isopropylacrylamide) (PNIPAM) polymers as a function of viscosity and phase transition temperatures. The effects of sodium chloride and of the molar ratio of cationic and anionic polyelectrolytes on the hydrodynamic particle sizes of poly(methacryloxyethyl trimethylammonium chloride) and poly(ethylene oxide)-block-poly(sodium methacrylate) and their complexes were studied. The particle sizes of the PNIPAM polymers and polyelectrolyte complexes measured by AsFlFFF were in agreement with those obtained by dynamic light scattering. The molar masses of the PNIPAM polymers obtained by AsFlFFF and by size exclusion chromatography also agreed well. In addition, AsFlFFF proved to be a practical technique for studying the thermo-responsive behavior of polymers at temperatures up to about 50 °C. The suitability of AsFlFFF for biological, biomedical and pharmaceutical applications was demonstrated by studying lipid-protein/peptide interactions and the stability of liposomes at different temperatures. AsFlFFF was applied to studies of the hydrophobic and electrostatic interactions between cytochrome c (a basic peripheral protein) and an anionic lipid, oleic acid, and the surfactant sodium dodecyl sulphate. A miniaturized AsFlFFF device constructed in this study was exploited to elucidate the effect of copper(II), pH, ionic strength and vortexing on the particle sizes of low-density lipoproteins.
Abstract:
Recent epidemiological studies have shown a consistent association of the mass concentration of urban air thoracic (PM10) and fine (PM2.5) particles with mortality and morbidity among cardiorespiratory patients. However, the chemical characteristics of the different particulate size ranges and the biological mechanisms responsible for these adverse health effects are not well known. The principal aims of this thesis were to validate a high volume cascade impactor (HVCI) for the collection of particulate matter for physicochemical and toxicological studies, and to make an in-depth chemical and source characterisation of samples collected during different pollution situations. The particulate samples were collected with the HVCI, virtual impactors and a Berner low-pressure impactor in six European cities: Helsinki, Duisburg, Prague, Amsterdam, Barcelona and Athens. The samples were analysed for particle mass, common ions, total and water-soluble elements, as well as elemental and organic carbon. Laboratory calibration and field comparisons indicated that the HVCI can provide unique large-capacity, high-efficiency sampling of size-segregated aerosol particles. The cutoff sizes of the recommended HVCI configuration were 2.4, 0.9 and 0.2 μm. The HVCI mass concentrations were in good agreement with the reference methods, but the chemical composition of especially the fine particulate samples showed some differences. This implies that the chemical characterization of the exposure variable in toxicological studies needs to be done from the same HVCI samples as are used in the cell and animal studies. The data from parallel, low volume reference samplers provide valuable additional information for chemical mass closure and source assessment. The major components of PM2.5 in the virtual impactor samples were carbonaceous compounds, secondary inorganic ions and sea salt, whereas those of coarse particles (PM2.5-10) were soil-derived compounds, carbonaceous compounds, sea salt and nitrate. The major and minor components together accounted for 77-106% and 77-96% of the gravimetrically measured masses of fine and coarse particles, respectively. Relatively large differences between sampling campaigns were observed in the organic carbon content of the PM2.5 samples as well as in the mineral composition of the PM2.5-10 samples. A source assessment based on chemical tracers suggested clear differences in the dominant sources (e.g. traffic, residential heating with solid fuels, metal industry plants, regional or long-range transport) between the sampling campaigns. In summary, the field campaigns exhibited different profiles with regard to particulate sources, size distribution and chemical composition, thus providing a highly useful setup for toxicological studies on the size-segregated HVCI samples.
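To make the chemical mass closure figures above concrete, here is a toy calculation (not from the thesis) in which the analysed components are summed and expressed as a percentage of the gravimetric particle mass; the component names and concentrations are purely illustrative.

```python
# Toy illustration of chemical mass closure: sum the analysed components and
# express them as a percentage of the gravimetrically measured mass.
# Values below are hypothetical; the text reports 77-106% closure for PM2.5.

def mass_closure_percent(components_ugm3: dict[str, float],
                         gravimetric_mass_ugm3: float) -> float:
    """Analysed component sum as a percentage of the gravimetric mass."""
    return 100.0 * sum(components_ugm3.values()) / gravimetric_mass_ugm3


if __name__ == "__main__":
    pm25 = {  # hypothetical PM2.5 composition, ug/m3
        "organic matter": 6.5,
        "elemental carbon": 1.2,
        "secondary inorganic ions": 5.8,
        "sea salt": 0.9,
        "mineral dust": 0.6,
    }
    print(f"mass closure: {mass_closure_percent(pm25, 16.4):.0f}%")  # ~91%
```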
Abstract:
Polymer-protected gold nanoparticles have been successfully synthesized by both "grafting-from" and "grafting-to" techniques. The synthesis methods for the gold particles were systematically studied. Two chemically different homopolymers were used to protect the gold particles: thermo-responsive poly(N-isopropylacrylamide), PNIPAM, and polystyrene, PS. Both polymers were synthesized using a controlled/living radical polymerization process, reversible addition-fragmentation chain transfer (RAFT) polymerization, to obtain monodisperse polymers of various molar masses and carrying dithiobenzoate end groups. Hence, particles protected either with PNIPAM (PNIPAM-AuNPs) or with a mixture of the two polymers (PNIPAM/PS-AuNPs, i.e., amphiphilic gold nanoparticles) were prepared. The particles contain monodisperse polymer shells, though the cores are somewhat polydisperse. Aqueous PNIPAM-AuNPs prepared using the "grafting-from" technique show thermo-responsive properties derived from the tethered PNIPAM chains. For PNIPAM-AuNPs prepared using the "grafting-to" technique, two phase transitions of PNIPAM were observed in microcalorimetric studies of the aqueous solutions. The first transition, with a sharp and narrow endothermic peak, occurs at lower temperature, and the second, with a broader peak, at higher temperature. In the first transition the PNIPAM segments show much higher cooperativity than in the second. The observations are tentatively rationalized by assuming that the PNIPAM brush can be subdivided into two zones, an inner and an outer one. In the inner zone, the PNIPAM segments are close to the gold surface, densely packed and less hydrated, and undergo the first transition. In the outer zone, on the other hand, the PNIPAM segments are looser and more hydrated, adopt a restricted random coil conformation, and show a phase transition that depends on both the particle concentration and the chemical nature of the end groups of the PNIPAM chains. Monolayers of the amphiphilic gold nanoparticles at the air-water interface show several characteristic regions upon compression in a Langmuir trough at room temperature. These can be attributed to polymer conformational transitions from a pancake to a brush. The compression isotherms also show temperature dependence due to the thermo-responsive properties of the tethered PNIPAM chains. The films were successfully deposited on substrates by the Langmuir-Blodgett technique. Sessile drop contact angle measurements conducted on both sides of the monolayer deposited at room temperature reveal two slightly different contact angles, which may indicate phase separation between the tethered PNIPAM and PS chains on the gold core. The optical properties of the amphiphilic gold nanoparticles were studied both in situ at the air-water interface and in the deposited films. The in situ SPR band of the monolayer shows a blue shift upon compression, while a red shift with the deposition cycle occurs in the deposited films. The blue shift is compression-induced and closely related to the conformational change of the tethered PNIPAM chains, which may cause a decrease in the polarity of the local environment of the gold cores. The red shift in the deposited films is due to weak interparticle coupling between adjacent particles. Temperature effects on the SPR band in both cases were also investigated.
In the in situ case, at constant surface pressure, an increase in temperature leads to a red shift in the SPR, likely due to the shrinking of the tethered PNIPAM chains as well as to a slight decrease in the distance between adjacent particles, resulting in an increase in the interparticle coupling. In the case of the deposited films, however, the SPR band red-shifts with the deposition cycles more at high temperature than at low temperature. This is because the compressibility of the polymer-coated gold nanoparticles at high temperature leads to a smaller interparticle distance, resulting in increased interparticle coupling in the deposited multilayers.
Abstract:
Palaeoenvironments of the latter half of the Weichselian ice age and the transition to the Holocene, from ca. 52 to 4 ka, were investigated using isotopic analysis of oxygen, carbon and strontium in mammal skeletal apatite. The study material consisted predominantly of subfossil bones and teeth of the woolly mammoth (Mammuthus primigenius Blumenbach), collected from Europe and Wrangel Island, northeastern Siberia. All samples have been radiocarbon dated, and their ages range from >52 ka to 4 ka. Altogether, 100 specimens were sampled for the isotopic work. In Europe, the studies focused on the glacial palaeoclimate and habitat palaeoecology. To minimise the influence of possible diagenetic effects, the palaeoclimatological and ecological reconstructions were based on the enamel samples only. The results of the oxygen isotope analysis of mammoth enamel phosphate from Finland and adjacent northwestern Russia, Estonia, Latvia, Lithuania, Poland, Denmark and Sweden provide the first estimate of oxygen isotope values in glacial precipitation in northern Europe. The glacial precipitation oxygen isotope values range from ca. -9.2±1.5‰ in western Denmark to -15.3‰ in Kirillov, northwestern Russia. These values are 0.6-4.1‰ lower than those in present-day precipitation, with the largest changes recorded in the currently marine-influenced southern Sweden and the Baltic region. The new enamel-derived oxygen isotope data from this study, combined with oxygen isotope records from earlier investigations of mammoth tooth enamel and palaeogroundwaters, facilitate a reconstruction of the spatial patterns of the oxygen isotope values of precipitation and of palaeotemperatures over much of Europe. The reconstructed geographic pattern of oxygen isotope levels in precipitation during 52-24 ka reflects the progressive isotopic depletion of air masses moving northeast, consistent with a westerly source of moisture for the entire region and a circulation pattern similar to that of the present day. The application of regionally varied δ/T-slopes, estimated from palaeogroundwater data and modern spatial correlations, yields reasonable estimates of glacial surface temperatures in Europe and implies 2-9°C lower long-term mean annual surface temperatures during the glacial period. The isotopic composition of carbon in the enamel samples indicates a pure C3 diet for the European mammoths, in agreement with previous investigations of mammoth ecology. A faint geographical gradient in the carbon isotope values of enamel is discernible, with more negative values in the northeast. The spatial trend is consistent with the climatic implications of the enamel oxygen isotope data, but may also suggest regional differences in habitat openness. The palaeogeographical changes caused by the eustatic rise of global sea level at the end of the Weichselian ice age were investigated on Wrangel Island, using the strontium isotope (Sr-87/Sr-86) ratios in the skeletal apatite of the local mammoth fauna. The diagenetic evaluations suggest good preservation of the original Sr isotope ratios, even in the bone specimens included in the study material. To estimate present-day environmental Sr isotope values on Wrangel Island, bioapatite samples from modern reindeer and muskoxen, as well as surface waters from rivers and ice wedges, were analysed. A significant shift towards more radiogenic bioapatite Sr isotope ratios, from 0.71218 ± 0.00103 to 0.71491 ± 0.00138, marks the beginning of the Holocene.
This implies a change in the migration patterns of the mammals, ultimately reflecting the inundation of the mainland connection and the isolation of the population. The bioapatite Sr isotope data support published coastline reconstructions placing the time of separation from the mainland at ca. 10-10.5 ka ago. The shift towards more radiogenic Sr isotope values in mid-Holocene subfossil remains after 8 ka ago reflects the rapid rise of sea level from 10 to 8 ka, which resulted in a considerable reduction of the accessible range area on early Wrangel Island.
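As a back-of-the-envelope illustration of how the δ/T-slopes mentioned above convert isotopic depletion into palaeotemperature estimates (the regionally calibrated slopes themselves are given in the thesis, so the numbers here are illustrative assumptions only):

```latex
% Illustrative relation: glacial-to-modern temperature difference from the
% oxygen-isotope depletion of precipitation and a locally calibrated slope.
\begin{equation}
  \Delta T \;\approx\;
  \frac{\delta^{18}\mathrm{O}_{\mathrm{glacial}} - \delta^{18}\mathrm{O}_{\mathrm{modern}}}
       {\bigl(\mathrm{d}\delta^{18}\mathrm{O}/\mathrm{d}T\bigr)_{\mathrm{regional}}}
\end{equation}
% For example, an assumed depletion of -2.8 permil with a slope of
% 0.6 permil per degree C gives roughly 4-5 degrees C of cooling, within the
% 2-9 degree C lowering of long-term mean annual temperature reported above.
```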
Abstract:
The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm. In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while having a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions to the subproblems into one final solution. In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that the dimensions are allowed to form clusters. Each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentation and show experimentally that segmenting sequences using this model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variation of the segmentation problem: in addition to partitioning the sequence, we also seek to apply a limited amount of reordering so that the overall representation error is minimized. We formulate the problem of segmentation with rearrangements and show that it is NP-hard to solve or even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and from outlier-detection algorithms for sequences. In the final part of the thesis, we discuss the problem of aggregating the results of segmentation algorithms applied to the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
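For concreteness, here is a minimal sketch of the standard dynamic programming baseline mentioned above for optimal k-segmentation of a one-dimensional sequence. The sum-of-squared-errors cost and all function names are assumptions for illustration; this is not the thesis's own code or necessarily its error measure.

```python
# Minimal sketch: optimal k-segmentation of a 1-D sequence by dynamic
# programming, minimizing the sum of squared deviations of each segment from
# its mean. Runs in O(n^2 * k) time, using prefix sums for O(1) segment costs.

import math


def _segment_cost(prefix, prefix_sq, i, j):
    """Sum of squared deviations from the mean over points i..j-1 (half-open)."""
    n = j - i
    s = prefix[j] - prefix[i]
    sq = prefix_sq[j] - prefix_sq[i]
    return sq - s * s / n


def optimal_segmentation(x, k):
    """Return (error, boundaries) of the best partition of x into k segments."""
    n = len(x)
    prefix = [0.0] * (n + 1)
    prefix_sq = [0.0] * (n + 1)
    for idx, v in enumerate(x):
        prefix[idx + 1] = prefix[idx] + v
        prefix_sq[idx + 1] = prefix_sq[idx] + v * v

    # dp[p][j]: minimal error of splitting the first j points into p segments
    dp = [[math.inf] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for p in range(1, k + 1):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                cand = dp[p - 1][i] + _segment_cost(prefix, prefix_sq, i, j)
                if cand < dp[p][j]:
                    dp[p][j] = cand
                    back[p][j] = i

    # Backtrack to recover the segment boundaries as half-open intervals
    bounds, j = [], n
    for p in range(k, 0, -1):
        bounds.append((back[p][j], j))
        j = back[p][j]
    return dp[k][n], list(reversed(bounds))


if __name__ == "__main__":
    data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1]
    error, segments = optimal_segmentation(data, 3)
    print(error, segments)  # expect segments close to [0, 3), [3, 6), [6, 8)
```

The approximation algorithm described in the first part of the thesis trades some of this exactness for speed by segmenting subsequences independently and then combining the partial solutions; the sketch above only illustrates the exact baseline it is compared against.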