38 results for MAGNETIC REVERSAL FREQUENCY
in Helda - Digital Repository of the University of Helsinki
Abstract:
In the present work, the effects of stimulus repetition and change in a continuous stimulus stream on the processing of somatosensory information in the human brain were studied. Human scalp-recorded somatosensory event-related potentials (ERPs) and magnetoencephalographic (MEG) responses rapidly diminished with stimulus repetition when mechanical or electric stimuli were applied to the fingers. In contrast, when ERPs and multi-unit activity (MUA) were recorded directly from the primary (SI) and secondary (SII) somatosensory cortices in a monkey, there was no marked decrement in the somatosensory responses as a function of stimulus repetition. These results suggest that this rate effect is not due to response diminution in the SI and SII cortices. Apparently, the responses to the first stimulus after a long "silent" period are enhanced due to unspecific initial orientation, originating in more broadly distributed and/or deeper neural structures, perhaps in the prefrontal cortices. With fast repetition rates, not only the late unspecific but also some early specific somatosensory ERPs were diminished in amplitude. The fast decrease of the ERPs as a function of stimulus repetition is mainly due to the disappearance of the orientation effect and, at faster repetition rates, additionally due to stimulus-specific refractoriness. A sudden infrequent change in the continuous stimulus stream also enhanced somatosensory MEG responses to electric stimuli applied to different fingers. These responses were quite similar to those elicited by the deviant stimuli alone when the frequent standard stimuli were omitted. This enhancement was apparently due to release from refractoriness, because the neural structures generating the responses to the infrequent deviants had more time to recover from refractoriness than the respective structures for the standards.
Infrequent deviant mechanical stimuli among frequent standard stimuli also enhanced somatosensory ERPs and, in addition, elicited a new negative wave which did not occur in the deviants-alone condition. This extra negativity could be recorded in response to deviations in the stimulation site and in the frequency of the vibratory stimuli. This response is probably a somatosensory analogue of the auditory mismatch negativity (MMN), which has been suggested to reflect a neural mismatch process between the sensory input and a sensory memory trace.
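The deviant-minus-standard logic behind the mismatch response described above can be illustrated with a short sketch. This is not code from the thesis; the epochs below are synthetic and all names are hypothetical:

```python
import numpy as np

def difference_wave(standard_epochs, deviant_epochs):
    """Average each condition across trials and subtract: the
    deviant-minus-standard difference wave isolates change-related
    activity such as the mismatch negativity (MMN)."""
    std_erp = np.mean(standard_epochs, axis=0)   # shape: (n_samples,)
    dev_erp = np.mean(deviant_epochs, axis=0)
    return dev_erp - std_erp

# Synthetic example: 100 standard and 20 deviant trials, 200 samples each.
rng = np.random.default_rng(0)
n_samples = 200
standards = rng.normal(0.0, 1.0, size=(100, n_samples))
deviants = rng.normal(0.0, 1.0, size=(20, n_samples))
deviants[:, 100:140] -= 2.0   # extra negativity in a post-stimulus window
mmn = difference_wave(standards, deviants)
peak_idx = np.argmin(mmn)     # most negative point of the difference wave
assert 100 <= peak_idx < 140
```

In a real experiment, the same subtraction is applied to baseline-corrected, artifact-rejected epochs time-locked to stimulus onset.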
Abstract:
Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing with either precise temporal resolution (EEG and MEG) or precise spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the acoustic noise of fMRI. In the present thesis, EEG and MEG in combination with behavioral techniques were used, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is further facilitated by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, fMRI noise had no effect on MMN and P3a, whereas the noise delayed and suppressed N1 and the exogenous N2.
Noise also suppressed the N1 amplitude in a matching-to-sample working-memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature-coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.
Abstract:
The synchronization of neuronal activity, especially in the beta (14-30 Hz) and gamma (30-80 Hz) frequency bands, is thought to provide a means for the integration of anatomically distributed processing and for the formation of transient neuronal assemblies. Thus, non-stimulus-locked (i.e., induced) gamma-band oscillations are believed to underlie feature binding and the formation of neuronal object representations. On the other hand, the functional roles of neuronal oscillations in the slower theta (4-8 Hz) and alpha (8-14 Hz) frequency bands remain controversial. In addition, early stimulus-locked activity has been largely ignored, as it is believed to reflect merely the physical properties of sensory stimuli. With human neuromagnetic recordings, both the functional roles of gamma- and alpha-band oscillations and the significance of early stimulus-locked activity in neuronal processing were examined in this thesis. Study I of this thesis shows that even the stimulus-locked (evoked) gamma oscillations were sensitive to high-level stimulus features of speech and non-speech sounds, suggesting that they may underlie the formation of early neuronal object representations for stimuli with behavioural relevance. Study II shows that neuronal processing for consciously perceived and unperceived stimuli differed as early as 30 ms after stimulus onset. This study also showed that alpha-band oscillations selectively correlated with conscious perception. Study III, in turn, shows that prestimulus alpha-band oscillations influence the subsequent detection and processing of sensory stimuli. Further, in Study IV, we asked whether phase synchronization between distinct frequency bands is present in cortical circuits. This study revealed prominent task-sensitive phase synchrony between alpha and beta/gamma oscillations. Finally, the implications of Studies II, III, and IV for the broader scientific context are analysed in the last study of this thesis (V).
I suggest in this thesis that neuronal processing may be extremely fast and that the evoked response is important for cognitive processes. I also propose that alpha oscillations define the global neuronal workspace of perception, action, and consciousness and, further, that cross-frequency synchronization is required for the integration of neuronal object representations into this global neuronal workspace.
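Cross-frequency phase synchrony of the kind reported in Study IV is commonly quantified with an n:m phase-locking value. A minimal sketch, under the assumption that instantaneous phases have already been extracted (e.g. via a Hilbert transform); the phase series below are synthetic:

```python
import numpy as np

def nm_phase_locking(phase_slow, phase_fast, n=2, m=1):
    """n:m phase-locking value between two phase time series
    (e.g. alpha vs. beta/gamma): |mean(exp(i*(n*phi_slow - m*phi_fast)))|.
    1 = perfect cross-frequency locking, near 0 = no consistent relation."""
    return float(np.abs(np.mean(np.exp(1j * (n * phase_slow - m * phase_fast)))))

# Synthetic example: a 10 Hz "alpha" phase and a 20 Hz "beta" phase that is
# exactly twice as fast, giving strong 2:1 locking; plus an unrelated phase.
t = np.arange(0.0, 2.0, 1e-3)                 # 2 s at 1 kHz
phi_alpha = 2 * np.pi * 10 * t
phi_beta = 2 * np.pi * 20 * t + 0.3           # constant phase offset
rng = np.random.default_rng(1)
phi_noise = 2 * np.pi * rng.random(t.size)    # random, unrelated phase

plv_locked = nm_phase_locking(phi_alpha, phi_beta)   # close to 1
plv_random = nm_phase_locking(phi_alpha, phi_noise)  # close to 0
assert plv_locked > 0.999 and plv_random < 0.2
```

In practice, significance of such a value is assessed against surrogate data (e.g. phase-shuffled trials), since finite samples always yield a nonzero PLV.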
Abstract:
Inadvertent climate modification has led to an increase in urban temperatures compared to the surrounding rural areas. The main reason for the temperature rise is the altered partitioning of the input net radiation into heat storage and sensible and latent heat fluxes, in addition to the anthropogenic heat flux. The heat storage flux and the anthropogenic heat flux have not yet been determined for Helsinki, and they are not directly measurable. In contrast, the turbulent fluxes of sensible and latent heat, as well as net radiation, can be measured, and the anthropogenic heat flux together with the heat storage flux can be solved as a residual. As a result, all inaccuracies in the determination of the energy balance components propagate into the residual term, and special attention must be paid to the accurate determination of the components. One cause of error in the turbulent fluxes is the attenuation of fluctuations at high frequencies, which can be accounted for by high-frequency spectral corrections. The aim of this study is twofold: to assess the relevance of high-frequency corrections to water vapor fluxes and to assess the temporal variation of the energy fluxes. Turbulent fluxes of sensible and latent heat have been measured at the SMEAR III station, Helsinki, since December 2005 using the eddy covariance technique. In addition, net radiation measurements have been ongoing since July 2007. The calculation methods used in this study consist of widely accepted eddy covariance data post-processing methods in addition to Fourier and wavelet analysis. The high-frequency spectral correction using the traditional transfer function method is highly dependent on relative humidity and has an 11% effect on the latent heat flux. This method is based on an assumption of spectral similarity, which is shown not to be valid. A new correction method using wavelet analysis is therefore introduced, and it seems to account for the high-frequency variation deficit.
Nevertheless, the resulting wavelet correction remains small in contrast to the traditional transfer function correction. The energy fluxes exhibit behavior characteristic of urban environments: the energy input is channeled into sensible heat, as the latent heat flux is restricted by water availability. The monthly mean residual of the energy balance ranges from 30 W m⁻² in summer to -35 W m⁻² in winter, implying heat storage in the ground during summer. Furthermore, the anthropogenic heat flux is estimated to be approximately 50 W m⁻² in winter, when residential heating is important.
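The residual calculation described above amounts to subtracting the measured turbulent fluxes from net radiation; what remains is the sum of the storage and anthropogenic heat fluxes. A minimal sketch (the flux values are hypothetical illustrations, not measurements from SMEAR III):

```python
import numpy as np

def energy_balance_residual(net_radiation, sensible, latent):
    """Residual of the surface energy balance (W m^-2):
    storage flux + anthropogenic flux = Q* - Q_H - Q_E,
    solved from the measured terms as described in the text."""
    return net_radiation - sensible - latent

# Illustrative midsummer daytime half-hour values (W m^-2, hypothetical):
q_star = np.array([450.0, 500.0, 480.0])   # net radiation Q*
q_h = np.array([280.0, 310.0, 300.0])      # sensible heat flux Q_H
q_e = np.array([120.0, 140.0, 130.0])      # latent heat flux Q_E

residual = energy_balance_residual(q_star, q_h, q_e)
mean_residual = residual.mean()   # positive -> net heat storage in the ground
assert mean_residual > 0
```

Because the residual is a small difference of large measured terms, any systematic error in Q*, Q_H, or Q_E (e.g. uncorrected high-frequency attenuation in Q_E) lands directly in it.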
Abstract:
The Standard Model of particle physics consists of quantum electrodynamics (QED) and the weak and strong nuclear interactions. QED is the basis for molecular properties, and thus it defines much of the world we see. The weak nuclear interaction is responsible for decays of nuclei, among other things, and in principle it should also have effects at the molecular scale. The strong nuclear interaction is hidden in interactions inside nuclei. From high-energy and atomic experiments it is known that the weak interaction does not conserve parity. Consequently, the weak interaction, and specifically the exchange of the Z^0 boson between a nucleon and an electron, induces small energy shifts of different sign for mirror-image molecules. This in turn makes one enantiomer of a molecule energetically more favorable than the other and also shifts the spectral lines of the mirror-image pair of molecules in different directions, creating a splitting. Parity violation (PV) in molecules, however, has not been observed. The topic of this thesis is how the weak interaction affects certain molecular magnetic properties, namely certain parameters of nuclear magnetic resonance (NMR) and electron spin resonance (ESR) spectroscopies. The thesis consists of numerical estimates of NMR and ESR spectral parameters and investigations of the effects of different aspects of quantum chemical computations on them. PV contributions to the NMR shielding and spin-spin coupling constants are investigated from the computational point of view. All aspects of quantum chemical electronic structure computations are found to be very important, which makes accurate computations challenging. Effects of molecular geometry are also investigated using a model system of polysilyene chains. The PV contribution to the NMR shielding constant is found to saturate after the chain reaches a certain length, but the effects of local geometry can be large.
Rigorous vibrational averaging is also performed for a relatively small and rigid molecule. Vibrational corrections to the PV contribution are found to be only a couple of percent. PV contributions to the ESR g-tensor are also evaluated using a series of molecules. Unfortunately, all the estimates are below the experimental limits, but PV in some of the heavier molecules comes close to present-day experimental resolution.
Abstract:
NMR spectroscopy enables the study of biomolecules, from peptides and carbohydrates to proteins, at atomic resolution. The technique uniquely allows for structure determination of molecules in the solution state. It also gives insights into dynamics and intermolecular interactions important for determining biological function. Detailed molecular information is entangled in the nuclear spin states. The information can be extracted by pulse sequences designed to measure the desired molecular parameters. Advancement of pulse sequence methodology therefore plays a key role in the development of biomolecular NMR spectroscopy. A range of novel pulse sequences for solution-state NMR spectroscopy is presented in this thesis. The pulse sequences are described in relation to the molecular information they provide. The pulse sequence experiments represent several advances in NMR spectroscopy, with particular emphasis on applications for proteins. Some of the novel methods focus on methyl-containing amino acids, which are pivotal for structure determination. Methyl-specific assignment schemes are introduced for increasing the size range of ¹³C,¹⁵N-labeled proteins amenable to structure determination without resorting to more elaborate labeling schemes. Furthermore, cost-effective means are presented for monitoring amide and methyl correlations simultaneously. Residual dipolar couplings can be applied for structure refinement as well as for studying dynamics. Accurate methods for measuring residual dipolar couplings in small proteins are devised, along with special techniques applicable when proteins require high-pH or high-temperature solvent conditions. Finally, a new technique is demonstrated to diminish strong-coupling-induced artifacts in HMBC, a routine experiment for establishing long-range correlations in unlabeled molecules. The presented experiments facilitate structural studies of biomolecules by NMR spectroscopy.
Abstract:
Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as the coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent magnet motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as the coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry.
In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes. However, GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searches and analyst knowledge remain invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, a simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the minor drawbacks of the technique. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
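Peak volumes of the kind used for GC×GC quantification can be approximated by summing the baseline-corrected detector signal over the two-dimensional retention plane. A sketch with a synthetic Gaussian peak (all grid and peak parameters are hypothetical, not taken from the thesis):

```python
import numpy as np

def peak_volume(signal_2d, dt1, dt2):
    """Volume of a baseline-corrected GCxGC peak: the detector signal
    summed over the 2D retention plane, scaled by the sampling steps
    in the first (dt1) and second (dt2) dimensions."""
    return float(signal_2d.sum() * dt1 * dt2)

# Synthetic Gaussian peak on a 2D retention-time grid (arbitrary units).
t1 = np.arange(0.0, 10.0, 0.1)     # first-dimension retention axis
t2 = np.arange(0.0, 5.0, 0.05)     # second-dimension retention axis
T1, T2 = np.meshgrid(t1, t2, indexing="ij")
peak = np.exp(-((T1 - 5.0) ** 2 / 0.5 + (T2 - 2.5) ** 2 / 0.125))

volume = peak_volume(peak, dt1=0.1, dt2=0.05)
# Analytic volume of this Gaussian is pi * sqrt(0.5 * 0.125) = pi/4.
assert abs(volume - np.pi / 4) < 0.01
```

For calibration, such volumes (or summed modulated peak areas) are regressed against standard concentrations, exactly as peak areas are in one-dimensional GC.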
Abstract:
This thesis discusses prehistoric human disturbance during the Holocene by means of case studies using detailed high-resolution pollen analysis of lake sediment. The four lakes studied are situated between latitudes 61°40' and 61°50' N in the Finnish Karelian inland area and vary between 2.4 and 28.8 ha in size. The existence of an Early Metal Age population was one important question. Another study question concerned the development of grazing and the relationship between slash-and-burn cultivation and permanent-field cultivation. The results are presented as pollen percentages and pollen concentrations (grains cm⁻³). Accumulation values (grains cm⁻² yr⁻¹) were calculated for the Lake Nautajärvi and Lake Orijärvi sediments, where the sediment accumulation rate was precisely determined. Sediment properties were determined using loss-on-ignition (LOI) and magnetic susceptibility (k). The dating methods used include both conventional and AMS ¹⁴C determinations, paleomagnetic dating and varve chronology. The isolation of Lake Kirjavalampi on the northern shore of Lake Ladoga took place ca. 1460-1300 BC. The long sediment cores from Finland, from Lake Kirkkolampi and Lake Orijärvi in southeastern Finland and Lake Nautajärvi in south-central Finland, all extended back to the Early Holocene; the lakes were isolated from the Baltic basin ca. 9600 BC, 8600 BC and 7675 BC, respectively. In the long sediment cores, the expansion of Alnus was visible between 7200 and 6840 BC. The spread of Tilia was dated in Lake Kirkkolampi to 6600 BC, in Lake Orijärvi to 5000 BC and in Lake Nautajärvi to 4600 BC. Picea is present locally in Lake Kirkkolampi from 4340 BC, in Lake Orijärvi from 6520 BC and in Lake Nautajärvi from 3500 BC onwards. The first modifications in the pollen data, apparently connected to anthropogenic impacts, were dated to the beginning of the Early Metal Period, 1880-1600 BC.
Anthropogenic activity became clear in all the study sites by the end of the Early Metal Period, between 500 BC and AD 300. According to Secale pollen, slash-and-burn cultivation was practised around the eastern study lakes from AD 300-600 onwards, and at the study site in central Finland from AD 880 onwards. The overall human impact, however, remained low at the studied sites until the Late Iron Age. Increasing human activity, including an increase in fire frequency, was detected from AD 800-900 onwards at the study sites in eastern Finland. At Lake Kirkkolampi, this included cultivation on permanent fields, but at Lake Orijärvi, permanent-field cultivation became visible as late as AD 1220, even though the macrofossil data demonstrated the onset of cultivation on permanent fields as early as the 7th century AD. On the northern shore of Lake Ladoga, local activity became visible from ca. AD 1260 onwards, and in the Lake Nautajärvi sediment, local occupation was traceable from AD 1420 onwards. The highest values of Secale pollen were recorded in both Lake Orijärvi and Lake Kirjavalampi between ca. AD 1700 and 1900, and can be associated with the most intensive period of slash-and-burn cultivation, from AD 1750 to 1850, in eastern Finland.
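Pollen concentrations (grains cm⁻³) and accumulation values (grains cm⁻² yr⁻¹) of the kind reported above are conventionally derived with an exotic-marker spike (e.g. Lycopodium tablets). The thesis does not state its exact counting protocol, so the following is only a generic sketch with hypothetical counts:

```python
def pollen_concentration(pollen_counted, marker_counted,
                         marker_added, sample_volume_cm3):
    """Pollen concentration (grains cm^-3) by the exotic-marker method:
    a known number of marker grains is added to the sample, and the
    fossil/marker count ratio scales up to the total concentration."""
    return pollen_counted / marker_counted * marker_added / sample_volume_cm3

def pollen_accumulation(concentration, sedimentation_rate_cm_yr):
    """Pollen accumulation rate (grains cm^-2 yr^-1): concentration
    times the sediment accumulation rate."""
    return concentration * sedimentation_rate_cm_yr

# Hypothetical counts: 500 fossil grains and 100 markers (of 10_000 added)
# counted in 1 cm^3 of sediment accumulating at 0.1 cm per year.
conc = pollen_concentration(500, 100, 10_000, 1.0)   # grains cm^-3
acc = pollen_accumulation(conc, 0.1)                 # grains cm^-2 yr^-1
assert conc == 50_000.0 and acc == 5_000.0
```

This is why accumulation values could only be computed for Lake Nautajärvi and Lake Orijärvi: they require an independently determined sediment accumulation rate.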
Abstract:
The importance of supercontinents in our understanding of the geological evolution of planet Earth has recently been emphasized. The role of paleomagnetism in reconstructing lithospheric blocks in their ancient paleopositions is vital. Paleomagnetism is the only quantitative tool for providing ancient latitudes and azimuthal orientations of continents. It also yields information on the character of the geomagnetic field in the past. In order to obtain a continuous record of the positions of continents, dated intrusive rocks are required in temporal progression. This is not always possible due to the pulse-like occurrence of dykes. In this work we demonstrate that studies of meteorite impact-related rocks may fill some gaps in the paleomagnetic record. This dissertation is based on paleomagnetic and rock magnetic data obtained from samples of the Jänisjärvi impact structure (Russian Karelia, most recent ⁴⁰Ar-³⁹Ar age of 682 Ma), the Salla diabase dyke (North Finland, U-Pb 1122 Ma), the Valaam monzodioritic sill (Russian Karelia, U-Pb 1458 Ma), and the Vredefort impact structure (South Africa, 2023 Ma). The paleomagnetic study of the Jänisjärvi samples was made in order to obtain a pole for Baltica, which lacks paleomagnetic data from 750 to ca. 600 Ma. The position of Baltica at ca. 700 Ma is relevant for verifying whether the supercontinent Rodinia had already fragmented. The paleomagnetic study of the Salla dyke was conducted to examine the position of Baltica at the onset of supercontinent Rodinia's formation. The virtual geomagnetic pole (VGP) from the Salla dyke provides hints that the Mesoproterozoic Baltica-Laurentia unity in the Hudsonland (Columbia, Nuna) supercontinent assembly may have lasted until 1.12 Ga. Moreover, the new VGP of the Salla dyke provides a new constraint on the timing of the rotation of Baltica relative to Laurentia (e.g. Gower et al., 1990).
A paleomagnetic study of the Valaam sill was carried out in order to shed light on the question of the existence of Baltica-Laurentia unity in the supercontinent Hudsonland. Combined with results from the dyke complex of the Lake Ladoga region (Schehrbakova et al., 2008), a new robust paleomagnetic pole for Baltica is obtained. This pole places Baltica at a latitude of 10°. This low-latitude location is also supported by Mesoproterozoic (1.5-1.3 Ga) red-bed sedimentation (for example the Satakunta sandstone). The Vredefort impactite samples provide a well-dated (2.02 Ga) pole for the Kaapvaal Craton. Rock magnetic data reveal unusually high Koenigsberger ratios (Q values) in all studied lithologies of the Vredefort dome. The high Q values are now seen for the first time also in samples from the Johannesburg Dome (ca. 120 km away), where there is no evidence of impact. Thus, a direct causative link between the high Q values and the Vredefort impact event can be ruled out.
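A virtual geomagnetic pole such as the one obtained for the Salla dyke is computed from a site-mean declination and inclination with the standard geocentric axial dipole formulas. A sketch (the site coordinates and directions below are illustrative, not the thesis data):

```python
import math

def vgp(site_lat, site_lon, decl, incl):
    """Virtual geomagnetic pole (VGP) from a site-mean declination and
    inclination, using the geocentric axial dipole formulas
    (all angles in degrees)."""
    lam_s, D, I = (math.radians(x) for x in (site_lat, decl, incl))
    # Magnetic latitude of the site from the dipole equation tan(I) = 2 tan(lam).
    lam = math.atan2(math.tan(I), 2.0)
    # Pole latitude from the spherical triangle site - pole - geographic axis.
    sin_lp = (math.sin(lam_s) * math.sin(lam)
              + math.cos(lam_s) * math.cos(lam) * math.cos(D))
    lam_p = math.asin(sin_lp)
    # Pole longitude: beta is the azimuthal offset from the site meridian.
    beta = math.asin(math.cos(lam) * math.sin(D) / math.cos(lam_p))
    if math.sin(lam) >= math.sin(lam_s) * sin_lp:   # cos(colatitude) test
        lon_p = site_lon + math.degrees(beta)
    else:
        lon_p = site_lon + 180.0 - math.degrees(beta)
    return math.degrees(lam_p), lon_p % 360.0

# Sanity check: D = 0 and I = arctan(2 tan 20) at a site (30 N, 25 E) put the
# pole 70 deg due north: over the geographic pole and 10 deg down the
# opposite meridian, i.e. at (80 N, 205 E).
incl = math.degrees(math.atan(2.0 * math.tan(math.radians(20.0))))
pole_lat, pole_lon = vgp(30.0, 25.0, 0.0, incl)
assert abs(pole_lat - 80.0) < 1e-6 and abs(pole_lon - 205.0) < 1e-6
```

A VGP from a single cooling unit samples one instant of secular variation; only the average of many such VGPs gives the paleomagnetic pole used for continental reconstructions.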
Abstract:
Volatility is central to options pricing and risk management. It reflects the uncertainty of investors and the inherent instability of the economy. Time series methods are among the most widely applied scientific methods used to analyze and predict volatility. Very frequently sampled data contain much valuable information about the different elements of volatility and may ultimately reveal the reasons for time-varying volatility. The use of such ultra-high-frequency data is common to all three essays of the dissertation. The dissertation belongs to the field of financial econometrics. The first essay uses wavelet methods to study the time-varying behavior of scaling laws and long memory in the five-minute volatility series of Nokia on the Helsinki Stock Exchange around the burst of the IT bubble. The essay is motivated by earlier findings which suggest that different scaling laws may apply to intraday time scales and to larger time scales, implying that the so-called annualized volatility depends on the data sampling frequency. The empirical results confirm the appearance of time-varying long memory and different scaling laws that, for a significant part, can be attributed to investor irrationality and to an intraday volatility periodicity called the New York effect. The findings have potentially important consequences for options pricing and risk management, which commonly assume constant memory and scaling. The second essay investigates modelling the duration between trades in stock markets. Durations convey information about investor intentions and provide an alternative view of volatility. Generalizations of standard autoregressive conditional duration (ACD) models are developed to meet needs observed in previous applications of the standard models.
According to the empirical results, based on data on actively traded stocks on the New York Stock Exchange and the Helsinki Stock Exchange, the proposed generalization clearly outperforms the standard models and also performs well in comparison with another recently proposed alternative to the standard models. The distribution used to derive the generalization may also prove valuable in other areas of risk management. The third essay studies empirically the effect of decimalization on volatility and market microstructure noise. Decimalization refers to the change from fractional pricing to decimal pricing, and it was carried out on the New York Stock Exchange in January 2001. The methods used here are more accurate than those in earlier studies and put more weight on market microstructure. The main result is that decimalization decreased observed volatility by reducing noise variance, especially for highly active stocks. The results aid risk management and the design of market mechanisms.
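The standard ACD model that the second essay generalizes specifies the conditional expected duration as an autoregression on past durations. A minimal simulation sketch of the baseline ACD(1,1) with exponential innovations (parameter values are illustrative only, not estimates from the essays):

```python
import numpy as np

def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8, seed=2):
    """Simulate a standard ACD(1,1) model: the conditional expected
    duration follows psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
    and the observed trade duration is x_i = psi_i * eps_i with
    unit-mean exponential innovations eps_i."""
    rng = np.random.default_rng(seed)
    psi = np.empty(n)
    x = np.empty(n)
    psi[0] = omega / (1.0 - alpha - beta)   # unconditional mean duration
    x[0] = psi[0] * rng.exponential(1.0)
    for i in range(1, n):
        psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
        x[i] = psi[i] * rng.exponential(1.0)
    return x, psi

durations, expected = simulate_acd(50_000)
# Long-run mean duration should be close to omega / (1 - alpha - beta) = 1.
assert abs(durations.mean() - 1.0) < 0.1
```

Generalizations of the kind the essay develops typically replace the exponential innovation distribution or the linear recursion for psi, while keeping this multiplicative-error structure.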
Abstract:
The rupture of a cerebral artery aneurysm causes a devastating subarachnoid hemorrhage (SAH), with a mortality of almost 50% during the first month. Each year, 8-11 per 100 000 people suffer an aneurysmal SAH in Western countries, but the number is twice as high in Finland and Japan. The disease is most common among those of working age, the mean age at rupture being 50-55 years. Unruptured cerebral aneurysms are found in 2-6% of the population, but knowledge about the true risk of rupture is limited. The vast majority of aneurysms should be considered rupture-prone, and treatment for these patients is warranted. Both unruptured and ruptured aneurysms can be treated by either microsurgical clipping or endovascular embolization. In a standard microsurgical procedure, the neck of the aneurysm is closed with a metal clip, sealing off the aneurysm from the circulation. Endovascular embolization is performed by packing the aneurysm from inside the vessel lumen with detachable platinum coils. Coiling is associated with slightly lower morbidity and mortality than microsurgery, but the long-term results of microsurgically treated aneurysms are better. Endovascular treatment methods are constantly being developed further in order to achieve better long-term results. New coils and novel embolic agents need to be tested in a variety of animal models before they can be used in humans. In this study, we developed an experimental rat aneurysm model and showed its suitability for testing endovascular devices. We optimized noninvasive MRI sequences at 4.7 Tesla for follow-up of coiled experimental aneurysms and for volumetric measurement of aneurysm neck remnants. We used this model to compare platinum coils with polyglycolic-polylactic acid (PGLA)-coated coils, and showed the benefits of the latter in this model.
The experimental aneurysm model and the imaging methods also gave insight into the mechanisms involved in aneurysm formation, and the model can be used in the development of novel imaging techniques. The model is affordable, easily reproducible, reliable, and suitable for MRI follow-up. It is also suitable for endovascular treatment, and it resists spontaneous occlusion.