12 results for 3-DIMENSIONAL MAGNETOHYDRODYNAMIC SIMULATIONS

in Helda - Digital Repository of the University of Helsinki


Relevance:

100.00%

Publisher:

Abstract:

Parkinson’s disease (PD) is the second most common neurodegenerative disease among the elderly. Its etiology is unknown and no disease-modifying drugs are available. Thus, more information concerning its pathogenesis is needed. Among other genes, mutated PTEN-induced kinase 1 (PINK1) has been linked to early-onset and sporadic PD, but its mode of action is poorly understood. Most animal models of PD are based on the use of the neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP). MPTP is metabolized to MPP+ by monoamine oxidase B (MAO B) and causes cell death of dopaminergic neurons in the substantia nigra in mammals. Zebrafish have been widely used as a model organism in developmental biology, but are now emerging as a model for human diseases due to their ideal combination of properties. Zebrafish are inexpensive and easy to maintain, develop rapidly, breed in large numbers producing transparent embryos, and are readily manipulated by various methods, particularly genetic ones. In addition, zebrafish are vertebrate animals, and results derived from zebrafish may be more applicable to mammals than results from invertebrate genetic models such as Drosophila melanogaster and Caenorhabditis elegans. However, the similarity cannot be taken for granted. The aim of this study was to establish and test a PD model using larval zebrafish. The developing monoaminergic neuronal systems of larval zebrafish were investigated. We identified and classified 17 catecholaminergic and 9 serotonergic neuron populations in the zebrafish brain. A 3-dimensional atlas was created to facilitate future research. Only one gene encoding MAO was found in the zebrafish genome. Zebrafish MAO showed MAO A-type substrate specificity, but non-A-non-B inhibitor specificity. The distribution of MAO in larval and adult zebrafish brains was both diffuse and distinctly cellular.
Inhibition of MAO during larval development led to markedly elevated 5-hydroxytryptamine (serotonin, 5-HT) levels, which decreased the locomotion of the fish. MPTP exposure caused a transient loss of cells in specific aminergic cell populations and decreased locomotion. MPTP-induced changes could be rescued by the MAO B inhibitor deprenyl, suggesting a role for MAO in MPTP toxicity. MPP+ affected only one catecholaminergic cell population; thus, the action of MPP+ was more selective than that of MPTP. The zebrafish PINK1 gene was cloned, and morpholino oligonucleotides were used to suppress its expression in larval zebrafish. The functional domains and expression pattern of zebrafish PINK1 resembled those of other vertebrates, suggesting that zebrafish is a feasible model for studying PINK1. Translation inhibition resulted in cell loss in the same catecholaminergic cell populations as MPTP and MPP+. Inactivation of PINK1 sensitized larval zebrafish to subefficacious doses of MPTP, causing a decrease in locomotion and cell loss in one dopaminergic cell population. Zebrafish appears to be a feasible model for studying PD, since its aminergic systems, the mode of action of MPTP, and the functions of PINK1 resemble those of mammals. However, the functions of zebrafish MAO differ from those of the two forms of MAO found in mammals. Future studies using zebrafish PD models should utilize the advantages specific to zebrafish, such as the ability to execute large-scale genetic or drug screens.

Relevance:

100.00%

Publisher:

Abstract:

Numerical weather prediction (NWP) models provide the basis for weather forecasting by simulating the evolution of the atmospheric state. A good forecast requires that the initial state of the atmosphere is known accurately, and that the NWP model is a realistic representation of the atmosphere. Data assimilation methods are used to produce initial conditions for NWP models. The NWP model background field, typically a short-range forecast, is updated with observations in a statistically optimal way. The objective of this thesis has been to develop methods that allow data assimilation of Doppler radar radial wind observations. The work has been carried out in the High Resolution Limited Area Model (HIRLAM) 3-dimensional variational data assimilation framework. Observation modelling is a key element in exploiting indirect observations of the model variables. In the radar radial wind observation modelling, the vertical model wind profile is interpolated to the observation location, and the projection of the model wind vector on the radar pulse path is calculated. The vertical broadening of the radar pulse volume and the bending of the radar pulse path due to atmospheric conditions are taken into account. Radar radial wind observations are modelled to within observation errors, which consist of instrumental, modelling, and representativeness errors. Systematic and random modelling errors can be minimized by accurate observation modelling. The impact of the random part of the instrumental and representativeness errors can be decreased by calculating spatial averages from the raw observations. Model experiments indicate that spatial averaging clearly improves the fit of the radial wind observations to the model in terms of observation minus model background (OmB) standard deviation. Monitoring the quality of the observations is an important aspect, especially when a new observation type is introduced into a data assimilation system.
Calculating the bias for radial wind observations in the conventional way can yield zero even when there are systematic differences in the wind speed and/or direction. A bias estimation method designed for this observation type is introduced in the thesis. Doppler radar radial wind observation modelling, together with the bias estimation method, enables the exploitation of radial wind observations for NWP model validation as well. The one-month model experiments performed with HIRLAM model versions differing only in a detail of the surface stress parameterization indicate that the use of radar wind observations in NWP model validation is very beneficial.
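The core projection step of the observation operator described above can be sketched in a few lines. This flat-geometry formula and its variable names are illustrative assumptions only; the actual HIRLAM operator additionally accounts for the vertical broadening of the pulse volume and the bending of the pulse path.

```python
import math

def radial_wind(u, v, w, azimuth_deg, elevation_deg):
    """Project a model wind vector (u east, v north, w up, in m/s)
    onto the radar beam direction given by the azimuth (clockwise
    from north) and elevation angles; returns the radial wind in m/s."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (u * math.sin(az) * math.cos(el)
            + v * math.cos(az) * math.cos(el)
            + w * math.sin(el))
```

For example, a purely eastward 10 m/s wind viewed along an eastward-pointing beam at zero elevation yields a radial wind of 10 m/s, while at non-zero elevation only the horizontal projection of the beam direction picks it up.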

Relevance:

100.00%

Publisher:

Abstract:

Earlier work has suggested that large-scale dynamos can reach and maintain equipartition field strengths on a dynamical time scale only if the magnetic helicity of the fluctuating field can be shed from the domain through open boundaries. We test this scenario in convection-driven dynamos by comparing results for open and closed boundary conditions. Three-dimensional numerical simulations of turbulent compressible convection with shear and rotation are used to study the effects of boundary conditions on the excitation and saturation level of large-scale dynamos. Open (vertical field) and closed (perfect conductor) boundary conditions are used for the magnetic field. The contours of shear are vertical, crossing the outer surface, and are thus ideally suited for driving a shear-induced magnetic helicity flux. We find that for a given shear and rotation rate, the growth rate of the magnetic field is larger if open boundary conditions are used. The growth rate first increases for small magnetic Reynolds number, Rm, but then levels off at an approximately constant value for intermediate values of Rm. For large enough Rm, a small-scale dynamo is excited and the growth rate in this regime increases in proportion to Rm^(1/2). In the nonlinear regime, the saturation level of the energy of the mean magnetic field is independent of Rm when open boundaries are used. In the case of perfect-conductor boundaries, the saturation level first increases as a function of Rm, but then decreases in proportion to Rm^(-1) for Rm > 30, indicative of catastrophic quenching. These results suggest that the shear-induced magnetic helicity flux is efficient in alleviating catastrophic quenching when open boundaries are used. The horizontally averaged mean field is still weakly decreasing as a function of Rm even for open boundaries.

Relevance:

100.00%

Publisher:

Abstract:

The aims were to determine whether measures of acceleration of the legs and back of dairy cows while they walk could help detect changes in gait or locomotion associated with lameness and differences in the walking surface. In 2 experiments, 12 or 24 multiparous dairy cows were fitted with five 3-dimensional accelerometers, 1 attached to each leg and 1 to the back, and acceleration data were collected while cows walked in a straight line on concrete (experiment 1) or on both concrete and rubber (experiment 2). Cows were video-recorded while walking to assess overall gait, asymmetry of the steps, and walking speed. In experiment 1, cows were selected to maximize the range of gait scores, whereas no clinically lame cows were enrolled in experiment 2. For each accelerometer location, the overall acceleration (the magnitude of the 3-dimensional acceleration vector) and its variance were calculated, as well as the asymmetry of variance of acceleration within the front and rear pairs of legs. In experiment 1, the asymmetry of variance of acceleration in the front and rear legs was positively correlated with overall gait and the visually assessed asymmetry of the steps (r ≥ 0.6). Walking speed was negatively correlated with the asymmetry of variance of the rear legs (r = −0.8) and positively correlated with the acceleration and the variance of acceleration of each leg and the back (r ≥ 0.7). In experiment 2, cows had lower gait scores [2.3 vs. 2.6; standard error of the difference (SED) = 0.1, measured on a 5-point scale] and lower scores for asymmetry of the steps (18.0 vs. 23.1; SED = 2.2, measured on a continuous 100-unit scale) when they walked on rubber compared with concrete, and their walking speed increased (1.28 vs. 1.22 m/s; SED = 0.02). The acceleration of the front (1.67 vs. 1.72 g; SED = 0.02) and rear (1.62 vs. 1.67 g; SED = 0.02) legs and the variance of acceleration of the rear legs (0.88 vs. 0.94 g; SED = 0.03) were lower when cows walked on rubber compared with concrete. Despite the improvements in gait score that occurred when cows walked on rubber, the asymmetry of variance of acceleration of the front legs was higher (15.2 vs. 10.4%; SED = 2.0). The difference in walking speed between concrete and rubber correlated with the difference in the mean acceleration and the difference in the variance of acceleration of the legs and back (r ≥ 0.6). Three-dimensional accelerometers seem to be a promising tool for on-farm lameness detection and for studying walking surfaces, especially when attached to a leg.
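The acceleration measures described above can be sketched as follows. The abstract does not give the exact feature definitions used in the study, so the percentage normalisation in `asymmetry_of_variance` is a hypothetical choice, not necessarily the authors' formula.

```python
import math
from statistics import pvariance

def overall_acceleration(ax, ay, az):
    """Magnitude of the 3-dimensional acceleration vector per sample,
    from the three axis signals of one accelerometer."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]

def asymmetry_of_variance(left, right):
    """Asymmetry (%) of the variance of overall acceleration within a
    pair of legs; this normalisation (relative to the larger variance)
    is an illustrative assumption."""
    vl, vr = pvariance(left), pvariance(right)
    return 100.0 * abs(vl - vr) / max(vl, vr)
```

A perfectly symmetric pair of legs gives 0%, and the index grows as one leg's acceleration signal becomes more variable than the other's.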

Relevance:

30.00%

Publisher:

Abstract:

An important challenge in the forest industry is getting the appropriate raw material from the forests to the wood-processing industry. Growth and stem reconstruction simulators are therefore increasingly integrated into industrial conversion simulators, linking the properties of wooden products to the three-dimensional structure of stems and their growing conditions. Static simulators predict the wood properties from stem dimensions at the end of a growth simulation period, whereas in dynamic approaches, the structural components, e.g. branches, are incremented along with the growth processes. The dynamic approach can be applied to stem reconstruction by predicting the three-dimensional stem structure from external tree variables (e.g. age and height) as the result of growth to the current state. In this study, a dynamic growth simulator, PipeQual, and a stem reconstruction simulator, RetroSTEM, are adapted to Norway spruce (Picea abies [L.] Karst.) to predict the three-dimensional structure of stems (taper, branchiness, wood basic density) over time, such that both simulators can be integrated into a sawing simulator. The parameterisation of the PipeQual and RetroSTEM simulators for Norway spruce relied on a theoretically based description of tree structure that develops in the growth process and follows certain conservative structural regularities while allowing for plasticity in crown development. The crown expressed both regularity and plasticity in its development, as the vertical foliage density peaked regularly at about 5 m from the stem apex, varying below that with tree age and dominance position (Study I). Conservative stem structure was characterized in terms of (1) the pipe ratios between foliage mass and branch and stem cross-sectional areas at the crown base, (2) the allometric relationship between foliage mass and crown length, (3) mean branch length relative to crown length, and (4) form coefficients in branches and stem (Study II).
The pipe ratio between branch and stem cross-sectional area at the crown base, and mean branch length relative to the crown length, may differ in trees before and after canopy closure, but the variation should be further analysed in stands of different ages and densities with varying site fertilities and climates. The predictions of the PipeQual and RetroSTEM simulators were evaluated by comparing the simulated values to measured ones (Studies III, IV). Both simulators predicted stem taper and branch diameter at the individual tree level with a small bias. RetroSTEM predictions of wood density were accurate. To achieve even more accurate predictions of stem diameters and branchiness along the stem, both simulators should be further improved by revising the following aspects: the relationship between foliage and stem sapwood area in the upper stem, the error source in branch sizes, the crown base development, and the height growth models in RetroSTEM. In Study V, the RetroSTEM simulator was integrated into the InnoSIM sawing simulator, and according to the pilot simulations, this turned out to be an efficient tool for readily producing stand-scale information about stem sizes and structure when approximating the available assortments of wood products.
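As a minimal illustration of the conservative structural regularities listed above, the pipe-model and branch-length relations can be sketched as below. The function names and parameters are hypothetical; real simulators such as PipeQual use calibrated, species-specific parameter values.

```python
def stem_area_at_crown_base(foliage_mass, pipe_ratio):
    """Pipe-model relation: foliage mass is taken to be proportional to
    the stem (sapwood) cross-sectional area at the crown base, so the
    area is foliage mass divided by the pipe ratio (mass per unit area)."""
    return foliage_mass / pipe_ratio

def mean_branch_length(crown_length, branch_length_ratio):
    """Mean branch length as a fixed fraction of crown length, one of
    the conservative regularities characterized in Study II."""
    return branch_length_ratio * crown_length
```

With a (hypothetical) pipe ratio of 2 mass units per area unit, 10 units of foliage mass would imply a sapwood area of 5 area units at the crown base.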

Relevance:

30.00%

Publisher:

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs, and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent magnet motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at the base. Three data analysis programs were developed, providing basic, comparison, and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights, and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry.
In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes. However, GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge are invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly when matrix is present and the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the minor drawbacks of the technique. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.

Relevance:

30.00%

Publisher:

Abstract:

Background: The incidence of all forms of congenital heart defects is 0.75%. For patients with congenital heart defects, life expectancy has improved with new treatment modalities. Structural heart defects may require surgical or catheter treatment, which may be corrective or palliative. Even those with corrective therapy need regular follow-up due to residual lesions, late sequelae, and possible complications after interventions. Aims: The aim of this thesis was to evaluate cardiac function before and after treatment for volume overload of the right ventricle (RV) caused by atrial septal defect (ASD), volume overload of the left ventricle (LV) caused by patent ductus arteriosus (PDA), and pressure overload of the LV caused by coarctation of the aorta (CoA), and to evaluate cardiac function in patients with Mulibrey nanism. Methods: In Study I, of the 24 children with ASD, 7 underwent surgical correction and 17 percutaneous occlusion of the ASD. Study II comprised 33 patients with PDA undergoing percutaneous occlusion. In Study III, 28 patients with CoA underwent either surgical correction or percutaneous balloon dilatation of the CoA. Study IV comprised 26 children with Mulibrey nanism. A total of 76 healthy volunteer children were examined as a control group. In each study, controls were matched to patients. All patients and controls underwent clinical cardiovascular examinations, two-dimensional (2D) and three-dimensional (3D) echocardiographic examinations, and blood sampling for measurement of natriuretic peptides prior to the intervention and two or three times thereafter. Control children were examined once by 2D and 3D echocardiography. M-mode echocardiography was performed from the parasternal long-axis view directed by 2D echocardiography. The left atrium-to-aorta (LA/Ao) ratio was calculated as an index of LA size.
The end-diastolic and end-systolic dimensions of the LV, as well as the end-diastolic thicknesses of the interventricular septum and LV posterior wall, were measured. LV volumes, the fractional shortening (FS), and the ejection fraction (EF) as indices of contractility were then calculated, and the z scores of LV dimensions were determined. Diastolic function of the LV was estimated from the mitral inflow signal obtained by Doppler echocardiography. In three-dimensional echocardiography, time-volume curves were used to determine end-diastolic and end-systolic volumes, stroke volume, and EF. Diastolic and systolic function of the LV was estimated from the calculated first derivatives of these curves. Results: (I): In all children with ASD, during the one-year follow-up, the z score of the RV end-diastolic diameter decreased and that of the LV increased. However, dilatation of the RV did not resolve entirely during the follow-up in either treatment group. In addition, the size of the LV increased more slowly in the surgical subgroup but reached control levels in both groups. Concentrations of natriuretic peptides in patients treated percutaneously increased during the first month after ASD closure and normalized thereafter, but in patients treated surgically, they remained higher than in controls. (II): In the PDA group, at baseline, the end-diastolic diameter of the LV exceeded 2 SD in 5 of 33 patients. The median N-terminal pro-brain natriuretic peptide (proBNP) concentration before closure was 72 ng/l in the control group and 141 ng/l in the PDA group (P = 0.001), and 6 months after closure it was 78.5 ng/l (P = NS). Patients differed from control subjects in indices of LV diastolic and systolic function at baseline, but by the end of follow-up, all these differences had disappeared. Even in the subgroup of patients with a normal-sized LV at baseline, the LV end-diastolic volume decreased significantly during follow-up.
(III): Before repair, the size and wall thickness of the LV were greater in patients with CoA than in controls. Systolic blood pressure measured a median of 123 mm Hg in patients before repair (P < 0.001), 103 mm Hg one year thereafter, and 101 mm Hg in controls. The diameter of the coarctation segment measured a median of 3.0 mm at baseline and 7.9 mm at the 12-month follow-up (P = 0.006). The thicknesses of the interventricular septum and posterior wall of the LV decreased after repair but increased to the initial level one year thereafter. The velocity time integrals of mitral inflow increased, but no changes were evident in LV dimensions or contractility. During follow-up, serum levels of natriuretic peptides decreased, correlating with diastolic and systolic indices of LV function in 2D and 3D echocardiography. (IV): In 2D echocardiography, the interventricular septum and LV posterior wall were thicker, and the velocity time integrals of mitral inflow shorter, in patients with Mulibrey nanism than in controls. In 3D echocardiography, LV end-diastolic volume measured a median of 51.9 (range 33.3 to 73.4) ml/m² in patients and 59.7 (range 37.6 to 87.6) ml/m² in controls (P = 0.040), and serum levels of ANPN and proBNP measured a median of 0.54 (range 0.04 to 4.7) nmol/l and 289 (range 18 to 9170) ng/l in patients, and 0.28 (range 0.09 to 0.72) nmol/l (P < 0.001) and 54 (range 26 to 139) ng/l (P < 0.001) in controls. They correlated with several indices of diastolic LV function. Conclusions: (I): During the one-year follow-up after ASD closure, RV size decreased but did not normalize in all patients. The size of the LV normalized after ASD closure, but the increase in LV size was slower in patients treated surgically than in those treated with the percutaneous technique. Serum levels of ANPN and proBNP were elevated prior to ASD closure but decreased thereafter to control levels in patients treated with the percutaneous technique, but not in those treated surgically.
(II): Changes in LV volume and function caused by PDA disappeared by 6 months after percutaneous closure. Even the children with a normal-sized LV benefited from the procedure. (III): After repair of CoA, the RV size and the velocity time integrals of mitral inflow increased, and serum levels of natriuretic peptides decreased. Patients need close follow-up, despite cessation of LV pressure overload, since LV hypertrophy persisted even in normotensive patients with normal growth of the coarctation segment. (IV): In children with Mulibrey nanism, the LV wall was hypertrophied, with myocardial restriction and impairment of LV function. Significant correlations appeared between indices of LV function, size of the left atrium, and levels of natriuretic peptides, indicating that measurement of serum levels of natriuretic peptides can be used in the clinical follow-up of this patient group despite their dependence on loading conditions.
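The contractility indices used in the Methods above, fractional shortening and ejection fraction, follow the conventional echocardiographic definitions; a minimal sketch (variable names are illustrative):

```python
def fractional_shortening(lvedd, lvesd):
    """FS (%) from the M-mode end-diastolic and end-systolic
    LV diameters: FS = (LVEDD - LVESD) / LVEDD * 100."""
    return 100.0 * (lvedd - lvesd) / lvedd

def ejection_fraction(edv, esv):
    """EF (%) from end-diastolic and end-systolic LV volumes,
    e.g. as read from a 3D echocardiographic time-volume curve:
    EF = (EDV - ESV) / EDV * 100."""
    return 100.0 * (edv - esv) / edv
```

For instance, an end-diastolic volume of 100 ml and an end-systolic volume of 40 ml give an ejection fraction of 60%.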

Relevance:

30.00%

Publisher:

Abstract:

Carbon nanotubes, seamless cylinders made from carbon atoms, have outstanding characteristics: inherent nano-size, a record-high Young’s modulus, high thermal stability, and chemical inertness. They also have extraordinary electronic properties: in addition to extremely high conductance, they can be both metals and semiconductors without any external doping, due merely to minute changes in the arrangement of atoms. As traditional silicon-based devices are reaching the level of miniaturisation where leakage currents become a problem, these properties make nanotubes a promising material for applications in nanoelectronics. However, several obstacles must be overcome before nanotube-based nanoelectronics can be developed. One of them is the ability to locally modify the electronic structure of carbon nanotubes and to create reliable interconnects between nanotubes and metal contacts that can be used to integrate nanotubes into macroscopic electronic devices. In this thesis, the possibility of using ion and electron irradiation as a tool to introduce defects in nanotubes in a controllable manner and to achieve these goals is explored. Defects are known to modify the electronic properties of carbon nanotubes. Some defects are always present in pristine nanotubes, and more are naturally introduced during irradiation. Their density can be controlled by the irradiation dose. Since different types of defects have very different effects on the conductivity, knowledge of their abundance as induced by ion irradiation is central to controlling the conductivity. In this thesis, the response of single-walled carbon nanotubes to ion irradiation is studied. It is shown that the conductance can indeed be controlled by energy-selective irradiation. Not only the conductivity but also the local electronic structure of single-walled carbon nanotubes can be changed by the defects.
The presented studies show a variety of changes in the electronic structures of semiconducting single-walled nanotubes, varying from individual new states in the band gap to changes in the band gap width. The extensive simulation results for various types of defects make it possible to unequivocally identify defects in single-walled carbon nanotubes by combining electronic structure calculations and scanning tunneling spectroscopy, offering reference data for the wide scientific community of researchers studying nanotubes with surface-probe microscopy methods. In electronics applications, carbon nanotubes have to be interconnected with the macroscopic world via metal contacts. Interactions between nanotubes and metal particles are also essential for nanotube synthesis, as single-walled nanotubes are always grown from metal catalyst particles. In this thesis, both the growth of nanotubes and the creation of nanotube-metal nanoparticle interconnects driven by electron irradiation are studied. The surface curvature and size of metal nanoparticles are demonstrated to determine the local carbon solubility in these particles. As for nanotube-metal contacts, previous experiments have demonstrated the possibility of creating junctions between carbon nanotubes and metal nanoparticles under irradiation in a transmission electron microscope. In this thesis, the microscopic mechanism of junction formation is studied by atomistic simulations carried out at various levels of sophistication. It is shown that structural defects created by the electron beam, and the efficient reconstruction of the nanotube atomic network inherently related to the nanometre size and quasi-one-dimensional structure of nanotubes, are the driving force for junction formation. Thus, the results of this thesis not only address practical aspects of irradiation-mediated engineering of nanosystems, but also contribute to our understanding of the behaviour of point defects in low-dimensional nanoscale materials.

Relevance:

30.00%

Publisher:

Abstract:

When ordinary nuclear matter is heated to a high temperature of ~ 10^12 K, it undergoes a deconfinement transition to a new phase, strongly interacting quark-gluon plasma. While the color-charged fundamental constituents of the nuclei, the quarks and gluons, are at low temperatures permanently confined inside color-neutral hadrons, in the plasma the color degrees of freedom become dominant over nuclear, rather than merely nucleonic, volumes. Quantum Chromodynamics (QCD) is the accepted theory of the strong interactions, and it confines quarks and gluons inside hadrons. The theory was formulated in the early seventies, but deriving first-principles predictions from it still remains a challenge, and novel methods of studying it are needed. One such method is dimensional reduction, in which the high-temperature dynamics of static observables of the full four-dimensional theory is described using a simpler three-dimensional effective theory, having only the static modes of the various fields as its degrees of freedom. A perturbatively constructed effective theory is known to provide a good description of the plasma at high temperatures, where asymptotic freedom makes the gauge coupling small. In addition, numerical lattice simulations have shown that the perturbatively constructed theory gives a surprisingly good description of the plasma all the way down to temperatures a few times the transition temperature. Near the critical temperature, however, the effective theory ceases to give a valid description of the physics, since it fails to respect the approximate center symmetry of the full theory. The symmetry plays a key role in the dynamics near the phase transition, and thus one expects that the regime of validity of the dimensionally reduced theories can be significantly extended towards the deconfinement transition by incorporating the center symmetry in them.
In the introductory part of the thesis, the status of dimensionally reduced effective theories of high-temperature QCD is reviewed, placing emphasis on the phase structure of the theories. In the first research paper included in the thesis, the non-perturbative input required in computing the g^6 term in the weak-coupling expansion of the pressure of QCD is computed in the effective theory framework for an arbitrary number of colors. The last two papers, on the other hand, focus on the construction of the center-symmetric effective theories, and subsequently the first non-perturbative studies of these theories are presented. Non-perturbative lattice simulations of a center-symmetric effective theory for SU(2) Yang-Mills theory show, in sharp contrast to the perturbative setup, that the effective theory accommodates a phase transition in the correct universality class of the full theory. This transition is seen to take place at a value of the effective theory coupling constant that is consistent with the full theory coupling at the critical temperature.

Relevance:

30.00%

Publisher:

Abstract:

In this dissertation we study the interaction between Saturn's moon Titan and the magnetospheric plasma and magnetic field. The method of research is a three-dimensional computer simulation model that is used to simulate this interaction. The simulation model used is a hybrid model. Hybrid models enable the tracking of individual ions and also take the particle motion into account in the propagation of the electromagnetic fields. The hybrid model has been developed at the Finnish Meteorological Institute. This thesis gives a general description of the effects that the solar wind has on Earth and other planets of our solar system. Planetary satellites can have similar interactions not only with the solar wind but also with the plasma flows of planetary magnetospheres. Titan is clearly the largest among the satellites of Saturn and also the only known satellite with a dense atmosphere. It is the atmosphere that makes Titan's plasma interaction with the magnetosphere of Saturn so unique. Nevertheless, comparisons with the plasma interactions of other solar system bodies are valuable. Detecting charged plasma particles requires in situ measurements obtainable through scientific spacecraft. The Cassini mission has been one of the most remarkable international efforts in space science. Since 2004 the measurements and images obtained from instruments onboard the Cassini spacecraft have increased the scientific knowledge of Saturn as well as its satellites and magnetosphere in a way that probably no one was able to predict. The current level of science on Titan is practically unthinkable without the Cassini mission. Many of the observations by the Cassini instrument teams have influenced this research, both direct measurements of Titan and observations of its plasma environment. The theoretical principles of the hybrid modelling approach are presented in connection with the broader context of plasma simulations. The developed hybrid model is described in detail: e.g. 
the way the equations of the hybrid model are solved is shown explicitly. Several simulation techniques, such as the grid structure and various boundary conditions, are discussed in detail as well. The testing and monitoring of simulation runs is presented as an essential routine when running sophisticated and complex models. Several significant improvements of the model, that are in preparation, are also discussed. A main part of this dissertation are four scientific articles based on the results of the Titan model. The Titan model developed during the course of the Ph.D. research has been shown to be an important tool to understand Titan's plasma interaction. One reason for this is that the structures of the magnetic field around Titan are very much three-dimensional. The simulation results give a general picture of the magnetic fields in the vicinity of Titan. The magnetic fine structure of Titan's wake as seen in the simulations seems connected to Alfvén waves an important wave mode in space plasmas. The particle escape from Titan is also a major part of these studies. Our simulations show a bending or turning of Titan's ionotail that we have shown to be a direct result of the basic principles in plasma physics. Furthermore, the ion flux from the magnetosphere of Saturn into Titan's upper atmosphere has been studied. The modelled ion flux has asymmetries that would likely have a large impact in the heating in different parts of Titan's upper atmosphere.
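The hybrid approach described above treats ions as individual particles moving in the electromagnetic fields, while the electrons form a massless fluid. As an illustrative sketch of the particle side only, the following implements the Boris scheme, a standard ion pusher in hybrid and particle-in-cell codes; this is a generic textbook scheme, and the actual discretization used in the thesis's model may differ.

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    """Advance one ion velocity by one time step with the Boris scheme.

    The update splits the Lorentz force into a half-step electric
    acceleration, an exact-magnitude rotation about B, and a second
    half-step electric acceleration.
    """
    # Half acceleration by the electric field
    v_minus = v + 0.5 * q_over_m * E * dt
    # Rotation around the magnetic field direction
    t = 0.5 * q_over_m * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    # Second half acceleration by the electric field
    return v_plus + 0.5 * q_over_m * E * dt
```

A useful property of the rotation step is that it conserves the particle speed exactly when E = 0, so a gyrating ion does not gain or lose energy numerically.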

Relevance:

30.00%

Publisher:

Abstract:

Nucleation is the first step of a phase transition, in which small nuclei of the new phase start appearing in the metastable old phase, for example small liquid clusters forming in a supersaturated vapor. Nucleation is important in various industrial and natural processes, including atmospheric new particle formation: between 20% and 80% of the atmospheric particle concentration is due to nucleation. These atmospheric aerosol particles have a significant effect on both climate and human health.

Simulation methods are often applied when studying phenomena that are difficult or even impossible to measure, or when trying to distinguish between the merits of various theoretical approaches. Such methods include, among others, molecular dynamics and Monte Carlo simulations. In this work, molecular dynamics simulations of the homogeneous nucleation of Lennard-Jones argon have been performed; homogeneous means that the nucleation does not occur on a pre-existing surface. The simulations include runs where the starting configuration is a supersaturated vapor and the nucleation event is observed during the simulation (direct simulations), as well as simulations of a cluster in equilibrium with a surrounding vapor (indirect simulations). The latter type is a necessity when the conditions prevent the occurrence of a nucleation event within a reasonable time frame in the direct simulations.

The effect of various temperature-control schemes on the nucleation rate (the rate of appearance of clusters that are equally likely to grow to macroscopic size and to evaporate) was studied and found to be relatively small. The method used to extract the nucleation rate was also found to be of minor importance. The cluster sizes from the direct and indirect simulations were used in conjunction with the nucleation theorem to calculate formation free energies for the clusters in the indirect simulations.
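The direct simulations described above rest on standard molecular dynamics machinery: pairwise Lennard-Jones forces integrated with a symplectic scheme. The sketch below is a minimal illustration in reduced units, without the cutoff radius, neighbor lists, or periodic boundaries a production MD code would use, and is not the thesis's actual simulation code.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces and total potential energy (reduced units)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r2 = np.dot(r_vec, r_vec)
            inv_r6 = (sigma**2 / r2) ** 3            # (sigma/r)^6
            energy += 4.0 * eps * (inv_r6**2 - inv_r6)
            # F_ij = 24 eps (2 (sigma/r)^12 - (sigma/r)^6) r_vec / r^2
            f = 24.0 * eps * (2.0 * inv_r6**2 - inv_r6) / r2 * r_vec
            forces[i] += f
            forces[j] -= f
    return forces, energy

def velocity_verlet(pos, vel, dt, mass=1.0, steps=100):
    """Advance positions and velocities with the velocity Verlet integrator."""
    f, _ = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f, _ = lj_forces(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel
```

For a dimer at the potential minimum r = 2^(1/6) sigma the net force vanishes and the energy equals -eps, a quick sanity check on the force expression; Newton's third law also keeps the total momentum exactly conserved.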
The results agreed with density functional theory but were higher than values from Monte Carlo simulations. The formation free energies were also used to calculate the surface tension of the clusters. Comparing the sizes of the clusters in the direct and indirect simulations showed that the direct-simulation clusters have more atoms between the liquid-like core of the cluster and the surrounding vapor. Finally, the performance of various nucleation theories in predicting the simulated nucleation rates was investigated; among other things, the results once again highlighted the inadequacy of the classical nucleation theory that is commonly employed in nucleation studies.
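For context on the classical nucleation theory whose shortcomings the abstract notes, its dimensionless form can be sketched as follows. The free energy of an n-atom cluster is Delta G(n)/kT = -n ln S + theta n^(2/3), where S is the supersaturation and theta is the reduced surface-energy term; the values used below are illustrative, not results from the thesis. The sketch also checks the first nucleation theorem, d(Delta G*/kT)/d ln S = -n*, which the abstract's free-energy analysis relies on and which holds exactly within CNT.

```python
def cnt_barrier(theta, ln_S):
    """Critical cluster size n* and barrier height Delta G*/kT from
    classical nucleation theory, in dimensionless form.

    Maximizing -n*ln_S + theta*n**(2/3) over n gives:
      n*        = (2 theta / (3 ln S))^3
      dG*/kT    = 4 theta^3 / (27 (ln S)^2)
    """
    n_star = (2.0 * theta / (3.0 * ln_S)) ** 3
    dG_star = 4.0 * theta**3 / (27.0 * ln_S**2)
    return n_star, dG_star
```

Differentiating dG* with respect to ln S gives -8 theta^3 / (27 (ln S)^3), which is exactly -n*: this is the nucleation theorem that lets simulated cluster sizes be converted into formation free energies.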