16 results for spray schedules
Abstract:
Miniaturized analytical devices, such as the heated nebulizer (HN) microchips studied in this work, are of increasing interest owing to benefits such as faster operation, better performance, and lower cost relative to conventional systems. HN microchips are microfabricated devices that vaporize liquid and mix it with gas. They are used at low liquid flow rates, typically a few µL/min, and have previously been utilized as ion sources for mass spectrometry (MS). Conventional ion sources are seldom feasible at such low flow rates. In this work, HN chips were developed further and new applications were introduced. First, a new method for the thermal and fluidic characterization of HN microchips was developed and used to study the chips. The thermal behavior of the chips was also studied by temperature measurements and infrared imaging. An HN chip was applied to the analysis of crude oil – an extremely complex sample – by microchip atmospheric pressure photoionization (APPI) high resolution mass spectrometry. With the chip, the sample flow rate could be reduced significantly without loss of performance and with greatly reduced contamination of the MS instrument. Thanks to its suitability for high temperatures, microchip APPI provided efficient vaporization of the nonvolatile compounds in crude oil. The first microchip version of sonic spray ionization (SSI) was also presented. Ionization was achieved by applying only high (sonic) speed nebulizer gas to an HN microchip. SSI significantly broadens the range of analytes ionizable with HN chips, from small stable molecules to labile biomolecules. The analytical performance of the microchip SSI source was confirmed to be acceptable. The HN microchips were also used to connect gas chromatography (GC) and capillary liquid chromatography (LC) to MS, using APPI for ionization.
Microchip APPI allows efficient ionization of both polar and nonpolar compounds whereas with the most popular electrospray ionization (ESI) only polar and ionic molecules are ionized efficiently. The combination of GC with MS showed that, with HN microchips, GCs can easily be used with MS instruments designed for LC-MS. The presented analytical methods showed good performance. The first integrated LC–HN microchip was developed and presented. In a single microdevice, there were structures for a packed LC column and a heated nebulizer. Nonpolar and polar analytes were efficiently ionized by APPI. Ionization of nonpolar and polar analytes is not possible with previously presented chips for LC–MS since they rely on ESI. Preliminary quantitative performance of the new chip was evaluated and the chip was also demonstrated with optical detection. A new ambient ionization technique for mass spectrometry, desorption atmospheric pressure photoionization (DAPPI), was presented. The DAPPI technique is based on an HN microchip providing desorption of analytes from a surface. Photons from a photoionization lamp ionize the analytes via gas-phase chemical reactions, and the ions are directed into an MS. Rapid analysis of pharmaceuticals from tablets was successfully demonstrated as an application of DAPPI.
Abstract:
The Department of Forest Resource Management at the University of Helsinki carried out the SIMO project in 2004–2007 to develop a new-generation forest management planning system. The project parties are the organisations that carry out most Finnish forest planning in government, industry, and privately owned forests. The aim of this study was to identify the needs and requirements for the new forest planning system and to clarify how the parties see the targets and processes of today's forest planning. The representatives responsible for forest planning in each organisation were interviewed one by one. According to the study, the stand-based system for managing and treating forests will continue in the future. Because of variable data acquisition methods with differing accuracy and sources, and the development of single-tree interpretation, more and more forest data are collected without fieldwork. The benefits of using more specific forest data also call for information units smaller than the tree stand. In Finland, forest planning computation is traditionally divided into two elements. After the forest data have been updated to the present situation, each stand unit's growth is simulated under the different alternative treatment schedules. After simulation, optimisation selects one treatment schedule for every stand so that the management program satisfies the owner's goals as well as possible. This arrangement will be maintained in the future system. The parties' requirements to add multi-criteria problem solving, group decision support methods, and heuristic and spatial optimisation to the system make the programming work more challenging. In general, the new system is expected to be adjustable and transparent. Strict documentation and freely available source code help to meet these expectations. Growth models and treatment schedules that differ in source information, accuracy, methods, and processing speed are expected to work together easily in the system.
The ability to calibrate models regionally and to set local parameters that change over time is also required. In the future, the forest planning system will be integrated into comprehensive data management systems together with geographic, economic, and work supervision information. This requires a modular implementation of the system and a simple data transmission interface between the modules and with other systems. No major differences in the parties' views of the system's requirements were found in this study; rather, the interviews completed the overall picture from slightly different angles. Within the organisations, forest management planning is considered quite inflexible, and it only draws the strategic lines. It does not yet have a role in operative activity, although the need for and benefits of team-level forest planning are acknowledged. The demands and opportunities created by variable forest data, new planning goals, and the development of information technology are recognised, and the party organisations want to keep up with this development. One example is their engagement in the extensive SIMO project, which connects the whole field of forest planning in Finland.
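The simulate-then-optimise arrangement described above (simulate alternative treatment schedules for each stand, then let optimisation pick one schedule per stand according to the owner's goals) can be sketched as a small toy in Python. This is illustrative only, not SIMO code; the stand data, schedule names, and the single-number utility are invented.

```python
def choose_schedules(stands, utility):
    """For each stand, keep the simulated schedule that maximises the
    owner's utility: the per-stand optimisation step described above."""
    plan = {}
    for stand_id, simulated in stands.items():
        # 'simulated' maps schedule name -> simulated outcome (here: net income)
        plan[stand_id] = max(simulated, key=lambda s: utility(simulated[s]))
    return plan

# Toy data: two stands, each with two alternative treatment schedules
# whose outcomes have already been simulated by a growth model.
stands = {
    "stand_1": {"thin_now": 120.0, "no_action": 95.0},
    "stand_2": {"clearcut": 300.0, "thin_now": 310.0},
}
plan = choose_schedules(stands, utility=lambda income: income)
# plan -> {"stand_1": "thin_now", "stand_2": "thin_now"}
```

A real system would replace the single-number outcome with multi-criteria goal variables and the independent per-stand maximisation with spatial or heuristic optimisation, as the abstract notes.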
Abstract:
An important safety aspect to be considered when foods are enriched with phytosterols and phytostanols is the oxidative stability of these lipid compounds, i.e. their resistance to oxidation and thus to the formation of oxidation products. This study concentrated on producing scientific data to support this safety evaluation process. In the absence of an official method for analyzing phytosterol/stanol oxidation products, we first developed a new gas chromatographic - mass spectrometric (GC-MS) method. We then investigated the factors affecting these compounds' oxidative stability in lipid-based food models in order to identify the critical conditions under which significant oxidation reactions may occur. Finally, the oxidative stability of phytosterols and stanols in enriched foods during processing and storage was evaluated. The enriched foods covered a range of commercially available phytosterol/stanol ingredients, different heat treatments during food processing, and different multiphase food structures. The GC-MS method was a powerful tool for measuring oxidative stability. Data obtained in the food model studies revealed that the critical factors for the formation and distribution of the main secondary oxidation products were sterol structure, reaction temperature, reaction time, and lipid matrix composition. Under all conditions studied, phytostanols, as saturated compounds, were more stable than the unsaturated phytosterols. In addition, esterification made phytosterols more reactive than free sterols at low temperatures, while at high temperatures the situation was reversed. Generally, oxidation reactions were more significant at temperatures above 100°C. At lower temperatures, the significance of these reactions increased with increasing reaction time.
The effect of lipid matrix composition was dependent on temperature: at temperatures above 140°C, phytosterols were more stable in an unsaturated lipid matrix, whereas below 140°C they were more stable in a saturated lipid matrix. At 140°C, phytosterols oxidized at the same rate in both matrices. Regardless of temperature, phytostanols oxidized more in an unsaturated lipid matrix. Generally, the distribution of oxidation products seemed to be associated with the phase of overall oxidation: 7-ketophytosterols accumulated while oxidation had not yet reached the dynamic state; once this state was attained, the major products were 5,6-epoxyphytosterols and 7-hydroxyphytosterols. The changes observed in phytostanol oxidation products were not as informative, since all the stanol oxides quantified were hydroxyl compounds. The formation of these secondary oxidation products did not account for all of the phytosterol/stanol losses observed during the heating experiments, indicating the presence of dimeric, oligomeric, or other oxidation products, especially when free phytosterols and stanols were heated at high temperatures. Commercially available phytosterol/stanol ingredients were stable during such food processes as spray-drying and ultra high temperature (UHT)-type heating and during subsequent long-term storage. Pan-frying, however, induced phytosterol oxidation and was classified as a rather deleterious process. Overall, the findings indicated that although phytosterols and stanols are stable under normal food processing conditions, attention should be paid to their use in frying. The complex interactions with other food constituents also suggest that when new phytosterol-enriched foods are developed, their oxidative stability must first be established. The results presented here will assist in choosing safe conditions for phytosterol/stanol enrichment.
Abstract:
Miniaturized mass spectrometric ionization techniques for environmental analysis and bioanalysis. Novel miniaturized mass spectrometric ionization techniques based on atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) were studied and evaluated in the analysis of environmental samples and biosamples. The three analytical systems investigated here were gas chromatography-microchip atmospheric pressure chemical ionization-mass spectrometry (GC-µAPCI-MS) and gas chromatography-microchip atmospheric pressure photoionization-mass spectrometry (GC-µAPPI-MS), where sample pretreatment and chromatographic separation precede ionization, and desorption atmospheric pressure photoionization-mass spectrometry (DAPPI-MS), where the samples are analyzed either as such or after minimal pretreatment. The gas chromatography-microchip atmospheric pressure ionization-mass spectrometry (GC-µAPI-MS) instrumentation was used in the analysis of polychlorinated biphenyls (PCBs) in negative ion mode and of 2-quinolinone-derived selective androgen receptor modulators (SARMs) in positive ion mode. The analytical characteristics (i.e., limits of detection, linear ranges, and repeatabilities) of the methods were evaluated with PCB standards and with SARMs in urine. All methods showed good analytical characteristics and potential for quantitative environmental analysis or bioanalysis. Desorption and ionization mechanisms in DAPPI were also studied. Desorption was found to be a thermal process whose efficiency depends strongly on the thermal conductivity of the sampling surface; the size and polarity of the analyte probably also play a role. In positive ion mode, ionization depends on the ionization energy and proton affinity of the analyte and the spray solvent, while in negative ion mode the ionization mechanism is determined by the electron affinity and gas-phase acidity of the analyte and the spray solvent.
DAPPI-MS was tested in the fast screening analysis of environmental, food, and forensic samples, and the results demonstrated the feasibility of DAPPI-MS for rapid screening analysis of authentic samples.
Abstract:
In this study we used electrospray ionization mass spectrometry to determine the phospholipid class and molecular species compositions of bacteriophages PM2, PRD1, Bam35, and phi6, as well as of their hosts. To obtain compositional data for the individual leaflets, the phospholipid transbilayer distribution in the viral membranes was studied. We found that 1) the membranes of all the bacteriophages studied are enriched in phosphatidylglycerol (PG) compared with the host membranes, 2) the molecular species compositions of the phage and host membranes are similar, and 3) the phospholipids in the viral membranes are distributed asymmetrically, with phosphatidylglycerol enriched in the outer leaflet and phosphatidylethanolamine in the inner one (except in Bam35). Alternative models for the selective incorporation of phospholipids into phages and for the origins of the asymmetric phospholipid transbilayer distribution are discussed. Notably, the present data are also useful for constructing high-resolution structural models of bacteriophages, since diffraction methods cannot provide a detailed structure of the membrane owing to the high mobility of the lipids and the lack of symmetric organization of the membrane proteins.
Abstract:
Migraine is a common disease in children and adolescents, affecting roughly 10% of school-aged children. Recent studies have revealed an increasing incidence of childhood migraine, but migraine remains an underrecognized and undertreated condition in the pediatric population. Migraine attacks are painful and disabling and can affect a child's life in many ways. Effective drug treatment is usually needed. The new migraine drugs, triptans, were introduced at the beginning of the 1990s and have since been shown to be very effective in the treatment of migraine attacks in adults. Although they are widely used in adults, the acute treatment of migraine in children and adolescents is still based on paracetamol and nonsteroidal anti-inflammatory drugs. Some children can control their attacks satisfactorily with simple analgesics, but at least one-third need more powerful treatments. When this thesis work commenced, hardly any information existed on the efficacy and safety of triptans in children. One aim of the thesis was to identify more efficient migraine treatments for children and adolescents by investigating the efficacy of sumatriptan nasal spray and oral rizatriptan compared with placebo. Sleep affects migraine in many ways. Despite the clinical relevance and common occurrence of sleep in the context of childhood migraine, very little research data exist on its true frequency. Because sleeping is so often related to childhood migraine, it can be a confounding factor in clinical drug trials of migraine treatments in children and adolescents, and how the results of a sleeping child should be analyzed is under continual debate. A further aim of the thesis was therefore to clarify this, as well as to evaluate the frequency of sleeping during migraine attacks in children and the factors affecting it.
Both nasal sumatriptan and oral rizatriptan were effective (superior to placebo) and well tolerated in the treatment of migraine attacks in children and adolescents aged 8-17 and 6-17 years, respectively. No serious adverse effects were observed. The results of this work suggest that nasal sumatriptan 20 mg and rizatriptan 10 mg can be used effectively and safely to treat migraine attacks in adolescents aged over 12 years when drugs more effective than NSAIDs are needed. No difference in the efficacy or safety of nasal sumatriptan or rizatriptan was observed between children younger than 12 years and older children, but because the number of treated patients under 12 years is still small, more studies are needed before sumatriptan or rizatriptan can be recommended for use in this population. Sleeping during migraine attacks was very common, and most children at least occasionally slept during an attack. Falling asleep was especially common in children under eight years of age and during the first hour after the onset of the attack. Children who were able to sleep soon after attack onset were more likely to be pain-free at two hours. Sleeping probably both improves recovery from a migraine attack and is a sign of headache relief. Falling asleep should therefore be classified as a sign of headache relief in clinical drug trials of migraine treatments in children and adolescents.
Abstract:
This study is one part of a collaborative depression research project, the Vantaa Depression Study (VDS), involving the Department of Mental and Alcohol Research of the National Public Health Institute, Helsinki, and the Department of Psychiatry of the Peijas Medical Care District (PMCD), Vantaa, Finland. The VDS includes two parts, a record-based study consisting of 803 patients, and a prospective, naturalistic cohort study of 269 patients. Both studies include secondary-level care psychiatric out- and inpatients with a new episode of major depressive disorder (MDD). Data for the record-based part of the study came from a computerised patient database incorporating all outpatient visits as well as treatment periods at the inpatient unit. We included all patients aged 20 to 59 years old who had been assigned a clinical diagnosis of depressive episode or recurrent depressive disorder according to the International Classification of Diseases, 10th edition (ICD-10) criteria and who had at least one outpatient visit or day as an inpatient in the PMCD during the study period January 1, 1996, to December 31, 1996. All those with an earlier diagnosis of schizophrenia, other non-affective psychosis, or bipolar disorder were excluded. Patients treated in the somatic departments of Peijas Hospital and those who had consulted but not received treatment from the psychiatric consultation services were excluded. The study sample comprised 290 male and 513 female patients. All their psychiatric records were reviewed and each patient completed a structured form with 57 items. The treatment provided was reviewed up to the end of the depression episode or to the end of 1997. Most (84%) of the patients received antidepressants, including a minority (11%) on treatment with clearly subtherapeutic low doses. 
During the treatment period the depressed patients investigated averaged only a few visits to psychiatrists (median two visits), but more to other health professionals (median seven). One-fifth of both genders were inpatients, with a mean of nearly two inpatient treatment periods during the overall treatment period investigated. The median length of a hospital stay was 2 weeks. Use of antidepressants was quite conservative: the first antidepressant had been switched to another compound in only about one-fifth (22%) of patients, and only two patients had received up to five antidepressant trials. Only 7% of those prescribed any antidepressant received two antidepressants simultaneously. None of the patients was prescribed any other augmentation medication. Refusing antidepressant treatment was the most common explanation for receiving no antidepressants. During the treatment period, 19% of those not already receiving a disability pension were granted one due to psychiatric illness. These patients were nearly nine years older than those not pensioned. They were also more severely ill, made significantly more visits to professionals, and received significantly more concomitant medications (hypnotics, anxiolytics, and neuroleptics) than did those receiving no pension. In the prospective part of the VDS, 806 adult patients (aged 20-59 years) were screened in the PMCD for a possible new episode of DSM-IV MDD. Of these, 542 patients were interviewed face-to-face with the WHO Schedules for Clinical Assessment in Neuropsychiatry (SCAN), Version 2.0. Exclusion criteria were the same as in the record-based part of the VDS. Of the interviewed patients, 269 fulfilled the criteria for a DSM-IV MDE. This study investigated factors associated with patients' functional disability, social adjustment, and work disability (being on sick-leave or being granted a disability pension).
At the beginning of treatment, the most important single factor associated with overall social and functional disability was severity of depression, but older age and personality disorders also contributed significantly. Total duration and severity of depression, phobic disorders, alcoholism, and personality disorders all independently contributed to poor social adjustment. Of those who were employed, almost half (43%) were on sick-leave. Besides the severity and number of episodes of depression, female gender and age over 50 years strongly and independently predicted being on sick-leave. Factors influencing social and occupational disability and social adjustment among patients with MDD were studied prospectively during an 18-month follow-up period. Patients' functional disability and social adjustment improved during the follow-up concurrently with recovery from depression. The current level of functioning and social adjustment of a patient with depression was predicted by severity of depression, recurrence before baseline and during follow-up, lack of full remission, and time spent depressed. Comorbid psychiatric disorders, personality traits (neuroticism), and perceived social support also had a significant influence. During the 18-month follow-up period, 13 (5%) of the 269 patients switched to bipolar disorder and 58 (20%) dropped out. Of the remaining 198 patients, 186 (94%) were not pensioned at baseline, and these were investigated further. Of them, 21 were granted a disability pension during the follow-up. Those who received a pension were significantly older, less often had a vocational education, and were more often on sick-leave than those not pensioned, but did not differ with regard to any other sociodemographic or clinical factor. Patients with MDD received mostly adequate antidepressant treatment, but problems existed in treatment intensity and monitoring.
It is challenging to find those at greatest risk of disability and to provide them with adequate and efficacious treatment. This also poses a great challenge to society as a whole to provide sufficient resources.
Abstract:
This study is part of the Mood Disorders Project conducted by the Department of Mental Health and Alcohol Research, National Public Health Institute, and consists of a general population survey sample and a major depressive disorder (MDD) patient cohort from the Vantaa Depression Study (VDS). The general population survey was conducted in 2003 in the cities of Espoo and Vantaa. The VDS is a collaborative depression research project between the Department of Mental Health and Alcohol Research of the National Public Health Institute and the Department of Psychiatry of the Peijas Medical Care District (PMCD) that began in 1997. It is a prospective, naturalistic cohort study of 269 secondary-level care psychiatric out- and inpatients with a new episode of Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) MDD. In the general population survey study, a total of 900 participants (300 from Espoo, 600 from Vantaa) aged 20–70 years were randomly drawn from the Population Register Centre in Finland. A self-report booklet, including the Eysenck Personality Inventory (EPI), the Temperament and Character Inventory - Revised (TCI-R), the Beck Depression Inventory, and the Beck Anxiety Inventory, was mailed to all subjects. Altogether 441 participants responded (94 returned only the shortened version without the TCI-R) and gave their informed consent. The VDS involved screening all patients aged 20-60 years (n=806) in the PMCD for a possible new episode of DSM-IV MDD. 542 consenting patients were interviewed with a semi-structured interview (the WHO Schedules for Clinical Assessment in Neuropsychiatry, version 2.0). 269 patients with a current DSM-IV MDD were included in the study and were further interviewed with semi-structured interviews to assess all other axis I and II psychiatric diagnoses. Exclusion criteria were DSM-IV bipolar I and II disorders, schizoaffective disorder, schizophrenia or another psychosis, and organic and substance-induced mood disorders.
The present study included the 193 individuals (139 females, 54 males) who could be followed up at both 6 and 18 months and whose depression had remained unipolar. Personality was investigated with the EPI. Personality dimensions were associated not only with the symptoms of depression but also with the symptoms of anxiety, both in the general population and in depressive patients, as well as with comorbid disorders in MDD patients, supporting the dimensional view of depression and anxiety. In the general population, high Harm Avoidance and low Self-Directedness were moderately, and low extraversion and high neuroticism strongly, associated with depressive and anxiety symptoms. The personality dimensions, especially high Harm Avoidance, low Self-Directedness, and high neuroticism, were also somewhat predictive of self-reported use of health care services for psychiatric reasons and of lifetime mental disorder. Moreover, high Harm Avoidance was associated with a family history of mental disorder. In depressive patients, neuroticism scores declined markedly and extraversion scores increased somewhat with recovery. The predictive value of the changes in symptoms of depression and anxiety in explaining follow-up neuroticism was about one-third of that of baseline neuroticism. In contrast to neuroticism, extraversion scores showed no dependence on the symptoms of anxiety, and the change in the symptoms of depression explained only one-twentieth of follow-up extraversion compared with baseline extraversion. No evidence of a scar effect was found during the one-year follow-up period. Finally, even after controlling for symptoms of both depression and anxiety, depressive patients had a somewhat higher level of neuroticism (odds ratio 1.11, p=0.001) and a slightly lower level of extraversion (odds ratio 0.92, p=0.003) than subjects in the general population.
Among MDD patients, a positive dose-exposure relationship appeared to exist between neuroticism and the prevalence and number of comorbid axis I and II disorders, and a negative relationship between the level of extraversion and the prevalence of comorbid social phobia and cluster C personality disorders. Personality dimensions are thus associated with the symptoms of depression and anxiety. Furthermore, these findings support the hypothesis that high neuroticism and somewhat low extraversion may be vulnerability factors for MDD, and that high neuroticism and low extraversion predispose to comorbid axis I and II disorders among patients with MDD.
Abstract:
The study seeks to find out whether the real burden of personal taxation has increased or decreased. To determine this, we investigate how the same real income has been taxed in different years. Whenever the taxes on the same real income are higher in a given year than in the base year, the real tax burden has increased; if they are lower, the real tax burden has decreased. The study thus seeks to estimate how changes in the tax regulations affect the real tax burden. It should be kept in mind that the progression of the central government income tax schedule ensures that a real change in income will bring about a change in the tax ratio. Inflation will likewise increase the real tax burden when the tax schedules are kept nominally unchanged. In the calculations of the study it is assumed that real income remains constant, so that we obtain an unbiased measure of the effects of governmental actions in real terms. The main factors influencing the amount of income taxes an individual must pay are as follows:
- gross income (income subject to central and local government taxes);
- deductions from gross income and from the taxes calculated according to the tax schedules;
- the central government income tax schedule (progressive income taxation);
- the rates for local taxes and for social security payments (proportional taxation).
In the study we investigate how much a certain group of taxpayers would have paid in taxes under the actual tax regulations prevailing in different years if their income were kept constant in real terms. Other factors affecting tax liability are kept strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation).
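The hypothetical-taxation idea above, taxing a constant real income under each year's rules with everything else held fixed, can be illustrated with a small sketch. The schedule, the price index values, and the bracket threshold below are invented for illustration; they are not the Finnish schedules studied here.

```python
def real_tax(real_income, year, schedule, price_index, base_year):
    """Tax on a constant real income under a given year's nominal schedule,
    expressed in base-year prices."""
    nominal_income = real_income * price_index[year] / price_index[base_year]
    nominal_tax = schedule(nominal_income)
    # deflate the nominal tax back to base-year prices
    return nominal_tax * price_index[base_year] / price_index[year]

# Toy progressive schedule kept nominally unchanged across years:
# 30% on income above a fixed 10,000 threshold.
def schedule(y):
    return max(0.0, 0.30 * (y - 10000))

price_index = {2000: 100.0, 2005: 120.0}
base = real_tax(20000, 2000, schedule, price_index, 2000)   # 3000.0
later = real_tax(20000, 2005, schedule, price_index, 2000)  # 3500.0
# later > base: with an unchanged nominal schedule, inflation alone
# raises the real tax burden ("bracket creep")
```

The comparison of `later` with `base` is exactly the hypothetical-taxation comparison described in the abstract, here showing the inflation effect mentioned above.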
The question we are addressing is thus how much tax a certain group of taxpayers with the same socioeconomic characteristics would have paid on the same real income under the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim. Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has, on average, increased or decreased from one year to the next. The main question remains: how should aggregation over all income levels be performed? To determine the average real changes in the tax scales, the difference functions (the differences between the actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed under the new and the old situation indicates whether taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, that is, by means of price indices. For example, we can use Laspeyres' price index formula to compute the ratio between the taxes determined by the new tax scales and by the old tax scales. The formula answers the question: how much more or less will be paid in taxes under the new tax scales than under the old ones when the real income situation corresponds to the old situation? In real terms, the central government tax burden declined steadily from its high post-war level until the mid-1950s. The real tax burden then drifted upwards until the mid-1970s; the real level of taxation in 1975 was twice that of 1961. In the 1980s there was a steady phase owing to the inflation corrections of the tax schedules.
In 1989 the tax schedule was cut drastically, and since the mid-1990s changes in the tax schedules have decreased the real tax burden significantly. Local tax rates have risen continuously, from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden especially in recent years. The aggregate figures indicate how the tax ratio for the same real income has changed over the years under the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio; a change in it depicts an increase or decrease in the real tax burden. The real income tax ratio declined for some years after the war, nearly doubled from the beginning of the 1960s to the mid-1970s, and has fallen by about 35% since the mid-1990s.
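The Laspeyres-type comparison of tax scales described above can be sketched as follows: taxes under the new scales are summed over the old (base-year) income situation and divided by the taxes under the old scales. The toy schedules and incomes below are invented; an index below one indicates a lighter real tax burden.

```python
def tax_scale_index(incomes, old_tax, new_tax):
    """Laspeyres-type index: taxes under the new scales relative to the old
    scales, both evaluated at the old (base-year) income distribution."""
    new_total = sum(new_tax(y) for y in incomes)
    old_total = sum(old_tax(y) for y in incomes)
    return new_total / old_total

# Two-bracket toy schedules; the marginal rates are illustrative only.
def old_tax(y):
    return 0.20 * y if y <= 30000 else 0.20 * 30000 + 0.35 * (y - 30000)

def new_tax(y):
    return 0.18 * y if y <= 30000 else 0.18 * 30000 + 0.32 * (y - 30000)

incomes = [20000, 40000, 60000]
index = tax_scale_index(incomes, old_tax, new_tax)
# index < 1.0: the new scales lighten the real tax burden
```

In the study itself the aggregation is weighted by taxable income; the unweighted sum here is a simplification of that step.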
Abstract:
The present collection of articles is based on an international conference held in Seinäjoki, Finland, in February 2009. The topic of the conference was Effective Rural and Urban Policies, and it was organised in co-operation between the University Consortium of Seinäjoki, Seinäjoki Technology Centre, and the City of Seinäjoki. The presented papers approached the drivers of regional development from several aspects and in different kinds of regional contexts across various countries. As a whole, the contributions formed a comprehensive account of the factors shaping the development of both rural and urban regions in the global economy. The role of the local innovation environment and the dynamics of the social processes that are ‘oiling’ the interaction between individuals within networks inspired several scholars. The development of physical infrastructure, as well as recent developments in economic models that can predict the regional impacts of large-scale investments, was also discussed in many presentations. A clear focus combined with cultural and disciplinary diversity formed a fruitful basis for the conference, and it was easy to learn something new. On behalf of all the organisers, I would like to thank all participants of the conference, and especially our foreign colleagues who travelled from afar to spend some winter days in Seinäjoki. As we all know, a publication of this kind does not appear automatically. All the authors have done a great job in finding time for writing in their busy schedules. Terttu Poranen and Jaana Huhtala have taken care of the technical editing of this publication. Sari Soini was the main organiser of the conference, and as an editor she has also kept up the pressure required to finalize this book. In addition to the University of Helsinki, the conference was financially supported by the University of Vaasa, the City of Seinäjoki, Lähivakuutus, and the Regional Centre Programme. These contributions are highly appreciated.
Abstract:
This paper addresses several questions in the compensation literature by examining stock option compensation practices of Finnish firms. First, the results indicate that principal-agent theory succeeds quite well in predicting the use of stock options. Proxies for monitoring costs, growth opportunities, ownership structure, and risk are found to determine the use of incentives consistent with theory. Furthermore, the paper examines whether determinants of stock options targeted to top management differ from determinants of broad-based stock option plans. Some evidence is found that factors driving these two types of incentives differ. Second, the results reveal that systematic risk significantly increases the likelihood that firms adopt stock option plans, whereas total firm risk and unsystematic risk do not seem to affect this decision. Third, the results show that growth opportunities are related to time-dimensional contracting frequency, consistent with the argument that incentive levels deviate more rapidly from optimum in firms with high growth opportunities. Finally, the results suggest that vesting schedules are decreasing in financial leverage, and that contract maturity is decreasing in firm focus. In addition, both vesting schedules and contract maturity tend to be longer in firms involving state ownership.
Abstract:
This dissertation deals with the design, fabrication, and applications of microscale electrospray ionization chips for mass spectrometry. The microchip consists of a microchannel that leads to a sharp electrospray tip. The microchannel contains micropillars that generate a powerful capillary action in the channel. The capillary action delivers the liquid sample to the electrospray tip, which converts the liquid sample into gas-phase ions that can be analyzed with mass spectrometry. The microchip uses a high voltage, which can also be utilized as a valve between the microchip and the mass spectrometer. The microchips can be used in various applications, such as analyses of drugs, proteins, peptides, or metabolites. The microchip works without pumps for liquid transfer, is usable for rapid analyses, and is sensitive. The performance characteristics of single microchips are studied, and a rotating multitip version of the microchips is designed and fabricated. The microchip can also be used as a microreactor, and reaction products can be detected online with mass spectrometry. This property can be utilized for protein identification, for example: proteins can be digested enzymatically on-chip, and the reaction products, in this case peptides, can be detected with mass spectrometry. Because reactions occur faster at the microscale owing to shorter diffusion lengths, the amount of protein can be very low, which is a benefit of the method. The microchip is well suited to surface-activated reactions because of the high surface-to-volume ratio of its dense micropillar array. For example, a titanium dioxide nanolayer on the micropillar array, combined with UV radiation, produces photocatalytic reactions that can be used to mimic drug-metabolism biotransformation reactions. Rapid mimicking with the microchip eases the detection of potentially toxic compounds in preclinical research and could therefore speed up research on new drugs.
A micropillar array chip can also be utilized in the fabrication of liquid chromatographic columns. Precisely ordered micropillar arrays offer a very homogeneous column, in which the separation of compounds has been demonstrated using both laser-induced fluorescence and mass spectrometry. Because of the small dimensions of the microchip, the integrated microchip-based liquid chromatography–electrospray device is especially well suited to low sample concentrations. Overall, this work demonstrates that the designed and fabricated silicon/glass three-dimensionally sharp electrospray tip is unique and facilitates a stable ion spray for mass spectrometry.
Abstract:
Spray irrigation of industrial waste water.
Abstract:
Powders are essential materials in the pharmaceutical industry, being involved in the majority of all drug manufacturing. Powder flow and particle size are central particle properties addressed by means of particle engineering. The aim of the thesis was to gain knowledge of powder processing with restricted liquid addition, with a primary focus on particle coating and early granule growth, and to characterise such processes. A thin coating layer of hydroxypropyl methylcellulose was applied to individual particles of ibuprofen in a fluidised bed top-spray process. The polymeric coating improved the flow properties of the powder. The improvement was strongly related to relative humidity, which can be seen as an indicator of a change in surface hydrophilicity caused by the coating. The ibuprofen used in the present study had a d50 of 40 μm and thus belongs to the Geldart group C powders, which are considered challenging materials in top-spray coating processes. Ibuprofen was similarly coated using a novel ultrasound-assisted coating method. The results were in line with those obtained from powders coated in the fluidised bed process mentioned above. The ultrasound-assisted method was found to be capable of coating single particles with a simple and robust setup. Granule growth in a fluidised bed process was inhibited by feeding the liquid in pulses. The results showed that the length of the pulsing cycles is of importance and can be used to adjust granule growth. Moreover, pulsed liquid feed was found to have a greater effect on granule growth at high inlet-air relative humidity. Liquid feed pulsing can thus be used as a tool for particle size targeting in fluidised bed processes and for compensating for changes in the relative humidity of the inlet air.
The nozzle function of a two-fluid external-mixing pneumatic nozzle, typical of small-scale pharmaceutical fluidised bed processes, was studied in situ in an ongoing fluidised bed process with particle tracking velocimetry. It was found that the liquid droplets undergo coalescence as they travel away from the nozzle head. Coalescence was expected to increase droplet speed, and this was confirmed in the study. The spray turbulence was also examined; the results revealed turbulence caused both by the atomisation event and by the oppositely directed fluidising air. It was concluded that particle tracking velocimetry is a suitable tool for in situ spray characterisation. The light transmission through dense particulate systems was found to carry information on particle size and packing density, as expected from the theory of light scattering by solids. It was possible to differentiate binary blends consisting of components with differing optical properties. Light transmission showed potential as a rapid, simple and inexpensive tool for the characterisation of particulate systems, giving information on changes in particle systems that could be utilised in basic process diagnostics.
Abstract:
Nonstandard hour child care is a rarely studied subject. From an adult's perspective it is commonly associated with concern for the child's wellbeing. The aim of this study was to view nonstandard hour child care and its everyday routines from the children's perspective. Three research questions were set. The first question dealt with the structuring of the physical environment and time in a kindergarten providing nonstandard hour child care. The second and third questions addressed children's agency and social interaction with adults and peers. The research design was qualitative, and the study was carried out as a case study. Research material was mainly obtained through observation, but interviews, photography and written documents were used as well. The material was analysed by means of content analysis. The study suggests that the physical environment and schedule of a kindergarten providing nonstandard hour child care are similar to those of kindergartens in general. The kindergarten's daily routine enabled children's active agency, especially during free play sessions, for which there was plenty of time. During free play children were able to interact with both adults and peers. Children's individual day care schedules challenged interaction between children. These special features should be considered in developing and planning nonstandard hour child care. In other words, children's agency and opportunities for social interaction should be kept in mind when organising the environment of early childhood education in kindergartens providing nonstandard hour child care.