30 results for T-way testing
Abstract:
Technical or contaminated ethanol products are sometimes ingested either accidentally or on purpose. Typically misused products are black-market liquor and automotive products, e.g., windshield washer fluids. In addition to less toxic solvents, these liquids may contain highly toxic methanol. Symptoms of even lethal solvent poisoning are often non-specific at an early stage. The present series of studies was carried out to develop a method for solvent intoxication breath diagnostics to speed up the diagnosis procedure conventionally based on blood tests. Especially in the case of methanol ingestion, the analysis method should be sufficiently sensitive and accurate to determine the presence of even small amounts of methanol in a mixture of ethanol and other less toxic components. In addition to the studies on the FT-IR method, the Dräger 7110 evidential breath analyzer was examined to determine its ability to reveal a coexisting toxic solvent. An industrial Fourier transform infrared analyzer was modified for breath testing. The sample cell fittings were widened and the cell size reduced in order to get an alveolar sample directly from a single exhalation. The performance and the feasibility of the Gasmet FT-IR analyzer were tested in clinical settings and in the laboratory. Actual human breath screening studies were carried out with healthy volunteers, inebriated homeless men, emergency room patients and methanol-intoxicated patients. A number of the breath analysis results were compared to blood test results in order to approximate the blood-breath relationship. In the laboratory experiments, the analytical performance of the Gasmet FT-IR analyzer and the Dräger 7110 evidential breath analyzer was evaluated by means of artificial samples resembling exhaled breath. The investigations demonstrated that a successful breath ethanol analysis by the Dräger 7110 evidential breath analyzer could exclude any significant methanol intoxication. In contrast, the device did not detect very high levels of acetone, 1-propanol and 2-propanol in simulated breath. The Dräger 7110 evidential breath ethanol analyzer was not equipped to recognize the interfering component. According to the studies, the Gasmet FT-IR analyzer was adequately sensitive, selective and accurate for solvent intoxication diagnostics. In addition to diagnostics, the fast breath solvent analysis proved feasible for controlling the ethanol and methanol concentrations during haemodialysis treatment. Because of the simplicity of the sampling and analysis procedure, non-laboratory personnel, such as police officers or social workers, could also operate the analyzer for screening purposes.
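For context, breath results such as these are typically converted into blood-concentration estimates through an assumed blood:breath partition ratio; the abstract only states that breath and blood results were compared, so the ratio value and units in the following minimal Python sketch are illustrative assumptions, not the calibration used in the thesis.

def breath_to_blood(breath_mg_per_l, partition_ratio=2100.0):
    """Estimate a blood concentration (g/L) from an end-expiratory breath concentration (mg/L)."""
    # 2100:1 is a commonly cited blood:breath ratio for ethanol; treat it as an assumption here.
    return breath_mg_per_l * partition_ratio / 1000.0  # mg per litre of blood -> g/L

# e.g. 0.25 mg/L of ethanol in breath corresponds to roughly 0.5 g/L in blood with this ratio
print(breath_to_blood(0.25))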
Abstract:
The autonomic nervous system is an important modulator of ventricular repolarization and arrhythmia vulnerability. This study explored the effects of cardiovascular autonomic function tests on repolarization and its heterogeneity, with special reference to congenital arrhythmogenic disorders typically associated with stress-induced fatal ventricular arrhythmias. The first part explored the effects of standardized autonomic tests on QT intervals in a 12-lead electrocardiogram and in multichannel magnetocardiography in 10 healthy adults. The second part studied the effects of deep breathing, the Valsalva manoeuvre, mental stress, sustained handgrip and mild exercise on QT intervals in asymptomatic patients with the LQT1 subtype of the hereditary long QT syndrome (n=9) and in patients with arrhythmogenic right ventricular dysplasia (ARVD, n=9). Even strong sympathetic activation had no effect on spatial QT interval dispersion in healthy subjects, but deep respiratory efforts and the Valsalva manoeuvre influenced it in ways that were opposite in electrocardiographic and magnetocardiographic recordings. LQT1 patients showed blunted QT interval and sinus nodal responses to sympathetic challenge, as well as exaggerated QT prolongation during the recovery phases. LQT1 patients showed a QT interval recovery overshoot in 2.4 ± 1.7 tests compared with 0.8 ± 0.7 in healthy controls (P = 0.02). Valsalva strain prolonged the T wave peak to T wave end interval only in the LQT1 patients, a finding considered to reflect the arrhythmogenic substrate in this syndrome. ARVD patients showed signs of abnormal repolarization in the right ventricle, modulated by abrupt sympathetic activation. An electrocardiographic marker reflecting interventricular dispersion of repolarization was introduced. It showed that LQT1 patients exhibit a repolarization gradient from the left ventricle towards the right ventricle that is significantly larger than in controls. In contrast, ARVD patients showed a repolarization gradient from the right ventricle towards the left. Valsalva strain amplified the repolarization gradient in LQT1 patients whereas it transiently reversed it in patients with ARVD. In conclusion, intrathoracic volume and pressure changes influence regional electrocardiographic and magnetocardiographic QT interval measurements differently. Especially the recovery phases of standard cardiovascular autonomic function tests and the Valsalva manoeuvre reveal the abnormal repolarization in asymptomatic LQT1 patients. Both LQT1 and ARVD patients have abnormal interventricular repolarization gradients, modulated by abrupt sympathetic activation. Autonomic testing, and in particular the Valsalva manoeuvre, is potentially useful in unmasking abnormal repolarization in these syndromes.
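As an aside, spatial QT dispersion, one of the quantities discussed above, is conventionally computed as the difference between the longest and shortest QT interval across simultaneously recorded leads; the short Python sketch below uses invented numbers to illustrate the calculation and is not code or data from the study.

def qt_dispersion(qt_ms_by_lead):
    """Spatial QT dispersion (ms): max minus min QT interval over the measured leads."""
    values = list(qt_ms_by_lead.values())
    return max(values) - min(values)

# Invented example values in milliseconds
example = {"I": 402, "II": 408, "V1": 392, "V2": 401, "V5": 415}
print(qt_dispersion(example))  # 415 - 392 = 23 ms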
Abstract:
The planet Mars is the Earth's neighbour in the Solar System. Planetary research stems from a fundamental need, typical of mankind, to explore our surroundings. Manned missions to Mars are already being planned, and understanding the environment to which the astronauts would be exposed is of utmost importance for a successful mission. Information about the Martian environment provided by models is already used in designing the landers and orbiters sent to the red planet. In particular, studies of the Martian atmosphere are crucial for instrument design, entry, descent and landing system design, landing site selection, and aerobraking calculations. Research on planetary atmospheres can also contribute to atmospheric studies of the Earth via model testing and the development of parameterizations: even after decades of modelling the Earth's atmosphere, we are still far from perfect weather predictions. On a global level, Mars has also been experiencing climate change. The aerosol effect is one of the largest unknowns in present terrestrial climate change studies, and the role of aerosol particles in any climate is fundamental: studies of climate variations on another planet can help us better understand our own global change. In this thesis I have used an atmospheric column model for Mars to study the behaviour of the lowest layer of the atmosphere, the planetary boundary layer (PBL), and I have developed nucleation (particle formation) models for Martian conditions. The models were also coupled to study, for example, fog formation in the PBL. The PBL is perhaps the most significant part of the atmosphere for landers and humans, since we live in it and experience its state, for example, as gusty winds, night frost, and fogs. However, PBL modelling in weather prediction models is still a difficult task. Mars hosts a variety of cloud types, mainly composed of water ice particles, but CO2 ice clouds also form in the very cold polar night and at high altitudes elsewhere. Nucleation is the first step in particle formation and always involves a phase transition. Cloud crystals on Mars form directly from vapour to ice on ubiquitous, suspended dust particles. Clouds on Mars have a small radiative effect in the present climate, but it may have been more important in the past. This thesis represents an attempt to model the Martian atmosphere at the smallest scales with high resolution. The models used and developed during the course of the research are useful tools for developing and testing parameterizations for larger-scale models all the way up to global climate models, since the small-scale models can describe processes that in the large-scale models are reduced to subgrid (not explicitly resolved) scale.
Abstract:
In this dissertation we study the interaction between Saturn's moon Titan and the magnetospheric plasma and magnetic field. The research method is a three-dimensional computer simulation model that is used to simulate this interaction. The simulation model used is a hybrid model. Hybrid models enable individual tracking or tracing of ions and also take the particle motion into account in the propagation of the electromagnetic fields. The hybrid model has been developed at the Finnish Meteorological Institute. This thesis gives a general description of the effects that the solar wind has on Earth and other planets of our solar system. Planetary satellites can have similar interactions both with the solar wind and with the plasma flows of planetary magnetospheres. Titan is clearly the largest among the satellites of Saturn and also the only known satellite with a dense atmosphere. It is the atmosphere that makes Titan's plasma interaction with the magnetosphere of Saturn so unique. Nevertheless, comparisons with the plasma interactions of other solar system bodies are valuable. Detecting charged plasma particles requires in situ measurements obtainable through scientific spacecraft. The Cassini mission has been one of the most remarkable international efforts in space science. Since 2004 the measurements and images obtained from instruments onboard the Cassini spacecraft have increased the scientific knowledge of Saturn as well as its satellites and magnetosphere in a way no one was probably able to predict. The current level of science on Titan is practically unthinkable without the Cassini mission. Many of the observations by Cassini instrument teams have influenced this research, both direct measurements of Titan and observations of its plasma environment. The theoretical principles of the hybrid modelling approach are presented in connection with the broader context of plasma simulations. The developed hybrid model is described in detail: e.g. the way the equations of the hybrid model are solved is shown explicitly. Several simulation techniques, such as the grid structure and various boundary conditions, are discussed in detail as well. The testing and monitoring of simulation runs is presented as an essential routine when running sophisticated and complex models. Several significant improvements of the model that are in preparation are also discussed. A main part of this dissertation consists of four scientific articles based on the results of the Titan model. The Titan model developed during the course of the Ph.D. research has been shown to be an important tool for understanding Titan's plasma interaction. One reason for this is that the structures of the magnetic field around Titan are very much three-dimensional. The simulation results give a general picture of the magnetic fields in the vicinity of Titan. The magnetic fine structure of Titan's wake, as seen in the simulations, seems connected to Alfvén waves, an important wave mode in space plasmas. The particle escape from Titan is also a major part of these studies. Our simulations show a bending or turning of Titan's ionotail that we have shown to be a direct result of basic principles of plasma physics. Furthermore, the ion flux from the magnetosphere of Saturn into Titan's upper atmosphere has been studied. The modelled ion flux has asymmetries that would likely have a large impact on the heating of different parts of Titan's upper atmosphere.
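To make the hybrid approach mentioned above concrete: in such models the ions are advanced as individual (macro)particles under the Lorentz force while the electrons are treated as a fluid and the fields live on a grid. The Python sketch below shows only the standard Boris particle push for a single ion; it is a generic textbook step, not the Finnish Meteorological Institute code, and the field and particle values are placeholders.

import numpy as np

def boris_push(v, E, B, q, m, dt):
    """Advance one ion's velocity by dt under electric field E and magnetic field B (SI units)."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                        # first half of the electric acceleration
    t = qmdt2 * B                                  # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)        # magnetic rotation
    return v_plus + qmdt2 * E                      # second half of the electric acceleration

# Placeholder values: a singly charged heavy ion in a few-nT ambient field
v = np.array([120e3, 0.0, 0.0])                    # m/s
E = np.array([0.0, 1e-4, 0.0])                     # V/m
B = np.array([0.0, 0.0, 5e-9])                     # T
print(boris_push(v, E, B, q=1.602e-19, m=2.7e-26, dt=0.01))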
Abstract:
Background: The national resuscitation guidelines were published in Finland in 2002 and are based on the international guidelines published in 2000. The main goal of the national guidelines, available on the Internet free of charge, is early defibrillation by nurses in an institutional setting. Aim: To study possible changes in cardiopulmonary resuscitation (CPR) practices, especially concerning early defibrillation, nurses' and students' attitudes towards guideline implementation, and nurses' and students' ability to implement the guideline recommendations in clinical practice after publication of the Current Care (CC) guidelines for CPR in 2002. Material and methods: CPR practices in Finnish health centres, especially rapid defibrillation programmes, as well as the implementation of the CC guidelines for CPR, were studied in a mail survey sent to the chief physicians of every health centre in Finland (Study I). CPR skills using an automated external defibrillator (AED) were compared in a study including an objective structured clinical examination (OSCE) of the resuscitation skills of nurses and nursing students in a Finnish and a Swedish hospital and institution (Studies II, III). Attitudes towards CPR-D and CPR guidelines among medical and nursing students and secondary hospital nurses were studied in surveys (Studies IV, V). Nurses receiving different CPR training were compared in a randomized trial including an OSCE of the CPR skills of nurses in a Finnish hospital (Study VI). Results: Two years after the publication, 40.7% of Finnish health centres used the national resuscitation guidelines. The proportion of health centres having at least one AED (66%) and the principle of nurse-performed defibrillation without the presence of a physician (42%) had increased. The CPR-D training was estimated to be insufficient regarding basic life support and advanced life support in the majority of health centres (Study I). The CPR-D skills of nurses and nursing students in two specific Swedish and Finnish hospitals and institutions (Studies II and III) were generally inadequate. The nurses performed better than the students, and the Swedish nurses surpassed the Finnish ones. Geriatric nurses receiving traditional CPR-D training performed better than those receiving an Internet-based course, but both groups failed to defibrillate within 60 s. Thus, the performance was not satisfactory even two weeks after traditional training (Study VI). Unlike the medical students, the nursing students did not feel competent to perform the procedures recommended in the cardiopulmonary resuscitation guidelines, including defibrillation. However, the majority of nursing students felt confident about their ability to perform basic life support. The perceived ability to defibrillate correlated significantly with a positive attitude towards nurse-performed defibrillation and negatively with fear of damaging the patient's heart by defibrillation (Study IV). After the educational intervention, the nurses found their level of CPR-D capability more sufficient than before and felt more confident about their ability to perform defibrillation themselves. A negative attitude toward defibrillation correlated with perceived negative organisational attitudes toward the cardiopulmonary resuscitation guidelines. After CPR-D education in the hospital, the majority (64%) of nurses hesitated to perform defibrillation because of anxiety and 27% hesitated because of fear of injuring the patient.
A negative personal attitude towards the guidelines also increased markedly after education (Study V). Conclusions: Although a significant change had occurred in resuscitation practices in primary health care after publication of the national cardiopulmonary resuscitation guidelines, the participants' CPR-D skills were not adequate according to the CPR guidelines. The current way of teaching is unlikely to result in participants being able to perform adequate and rapid CPR-D. More information and more frequent training are needed to diminish anxiety concerning defibrillation. Negative beliefs and attitudes toward defibrillation affect the nursing students' and nurses' attitudes toward the cardiopulmonary resuscitation guidelines. CPR-D education increased the participants' self-confidence concerning CPR-D skills, but it did not reduce their anxiety. AEDs have replaced manual defibrillators in most institutions, but in spite of the modern devices the anxiety still exists. Basic education does not provide nursing students with adequate CPR-D skills. Thus, frequent training in the workplace is of vital importance. A multi-professional programme supported by the administration might provide better CPR-D skills. Distance learning alone cannot substitute for traditional small-group learning; tutored hands-on training is needed to learn practical CPR-D skills. Standardized testing would probably help control the quality of learning. Training of group-working skills might improve CPR performance.
Abstract:
The 11β-hydroxysteroid dehydrogenase enzymes (11β-HSD) 1 and 2 regulate the amounts of cortisone and cortisol in tissues. An excess of the 11β-HSD1 enzyme, especially in visceral adipose tissue, causes the classic symptoms of the metabolic syndrome, which offers an opportunity to treat the metabolic syndrome by selective inhibition of 11β-HSD1. Inhibition of the 11β-HSD2 enzyme causes cortisol-mediated activation of mineralocorticoid receptors, which in turn leads to hypertensive side effects. Despite these side effects, inhibition of 11β-HSD2 may be useful in situations where the amount of cortisol in the body needs to be raised. Numerous selective 11β-HSD1 inhibitors have been developed, but fewer 11β-HSD2 inhibitors have been reported. The difference between the active sites of these two isozymes is also unknown, which complicates the development of inhibitors selective for either enzyme. This work had two aims: (1) to find the difference between the 11β-HSD enzymes and (2) to develop a pharmacophore model that could be used for virtual screening of selective 11β-HSD2 inhibitors. The problem was approached computationally: by homology modelling, docking of small molecules into the protein, ligand-based pharmacophore modelling and virtual screening. The SwissModeler program was used for homology modelling, and the created model superimposed well both on its template (17β-HSD1) and on the 11β-HSD1 enzyme. No difference between the enzymes was found by examining the superimposed structures. Seven compounds, six of which are 11β-HSD2-selective, were docked into both enzymes using the GOLD program. The compounds bound to 11β-HSD1 in the same way as most 11β-HSD1-selective or non-selective inhibitors, whereas all compounds docked into 11β-HSD2 in a reversed orientation. Such a binding mode enables hydrogen bonds to Ser310 and Asn171, amino acids seen only in the 11β-HSD2 enzyme. The LigandScout 3.0 program was used for pharmacophore modelling, and the virtual screenings were also run with it. The two pharmacophore models created, based on the six 11β-HSD2-selective compounds also used earlier for docking, consisted of six features (hydrogen bond acceptor, hydrogen bond donor and hydrophobic features) and exclusion volumes. The features most important for 11β-HSD2 selectivity are a hydrogen bond acceptor, which can form a bond with Ser310, and a hydrogen bond donor next to it. No interaction partner for this hydrogen bond donor was found in the 11β-HSD2 model. However, a water molecule suitably oriented in the protein could be a plausible counterpart for the missing interaction. Because both pharmacophore models retrieved 11β-HSD2-selective compounds and left out non-selective ones in a test screening, both models were used to screen a database compiled from the compounds stored at the University of Innsbruck (2700 compounds). From the hits of the two screenings, a total of ten compounds were selected and sent for biological testing. The results of the biological tests will ultimately confirm how well the created models actually represent 11β-HSD2 selectivity.
Abstract:
The nutritional quality of the product as well as other quality attributes, such as microbiological and sensory quality, are essential factors in the baby food industry, and therefore alternative sterilization methods to conventional heating processes are of great interest in this food sector. This report gives an overview of different sterilization techniques for baby food. The report is part of the work done in work package 3, “QACCP Analysis Processing: Quality-driven distribution and processing chain analysis”, in the Core Organic ERANET project called Quality analysis of critical control points within the whole food chain and their impact on food quality, safety and health (QACCP). The overall objective of the project is to optimise organic production and processing in order to improve food safety as well as nutritional quality and increase health-promoting aspects in consumer products. The approach will be a chain analysis approach which addresses the link between farm and fork and backwards from fork to farm. The objective is to improve product-related quality management in farming (towards testing food authenticity) and processing (towards food authenticity and sustainable processes). The articles in this volume do not necessarily reflect the Core Organic ERANET’s views and in no way anticipate the Core Organic ERANET’s future policy in this area. The contents of the articles in this volume are the sole responsibility of the authors. The information contained herein, including any expression of opinion and any projection or forecast, has been obtained from sources believed by the authors to be reliable but is not guaranteed as to accuracy or completeness. The information is supplied without obligation and on the understanding that any person who acts upon it or otherwise changes his/her position in reliance thereon does so entirely at his/her own risk. The writers gratefully acknowledge the financial support from the Core Organic funding bodies: the Ministry of Agriculture and Forestry, Finland; the Swiss Federal Office for Agriculture, Switzerland; and the Federal Ministry of Consumer Protection, Food and Agriculture, Germany.
Abstract:
Layering is a widely used method for structuring data in CAD models. During the last few years national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be more of a union than the least common denominator of the capabilities of existing guidelines. A number of principles have been followed in the design of the proposal. The first is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second is orthogonality - the fact that many ways of classifying information are independent of each other and can be applied in combinations. The third overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, and whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing, etc. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model. Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. In the article, two implementations, which have been carried out independently in Sweden and Finland, are described. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where “best practice” can become “common practice” via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
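To illustrate the kind of fixed-field layer naming such a framework standard implies (mandatory fields for the responsible party, the building element and the presentation, plus optional fields), here is a small, purely hypothetical Python sketch; the field names, widths and codes are assumptions for illustration and do not reproduce the actual ISO 13567 field definitions.

def layer_name(agent, element, presentation, status="-", scale="-"):
    """Concatenate fixed-width fields into a layer name (hypothetical widths and codes)."""
    # Pad each field to an assumed width; '-' marks an unused position.
    return (agent.ljust(2, "-") + element.ljust(6, "-")
            + presentation.ljust(2, "-") + status + scale)

# e.g. architect ('A'), element code '21' (hypothetical), element graphics ('E')
print(layer_name("A", "21", "E"))  # -> 'A-21----E---'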
Abstract:
A growing body of empirical research examines the structure and effectiveness of corporate governance systems around the world. An important insight from this literature is that corporate governance mechanisms address the excessive use of managerial discretionary powers to extract private benefits by expropriating shareholder value. One possible way of expropriation is to reduce the quality of disclosed earnings by manipulating the financial statements. This lower quality of earnings should then be reflected in the firm's stock price according to the value relevance theorem. Hence, instead of testing the direct effect of corporate governance on the firm's market value, it is important to understand the causes of the lower quality of accounting earnings. This thesis contributes to the literature by increasing knowledge about the extent of earnings management - measured as the extent of discretionary accruals in total disclosed earnings - and its determinants across transitional European countries. The thesis comprises three essays of empirical analysis, of which the first two utilize data on Russian listed firms whereas the third essay uses data from 10 European economies. More specifically, the first essay adds to existing research connecting earnings management to corporate governance. It tests the impact of the Russian corporate governance reforms of 2002 on the quality of disclosed earnings in all publicly listed firms. This essay provides empirical evidence that the desired impact of the reforms is not fully realised in Russia without proper enforcement. Instead, firm-level factors such as long-term capital investments and compliance with International Financial Reporting Standards (IFRS) determine the quality of the earnings. The results presented in the essay support the notion proposed by Leuz et al. (2003) that reforms aimed at bringing transparency do not produce the desired results in economies where investor protection is low and legal enforcement is weak. The second essay focuses on the relationship between internal control mechanisms, such as the types and levels of ownership, and the quality of disclosed earnings in Russia. The empirical analysis shows that controlling shareholders in Russia use their powers to manipulate the reported performance in order to obtain private benefits of control. Comparatively, firms owned by the State have significantly better quality of disclosed earnings than firms owned by other controllers such as oligarchs and foreign corporations. Interestingly, the market performance of firms controlled by either the State or oligarchs is better than that of widely held firms. The third essay provides evidence that both ownership structures and economic characteristics are important factors in determining the quality of disclosed earnings in three groups of countries in Europe. The evidence suggests that ownership structure is a more important determinant in developed and transparent countries, while economic characteristics matter more in developing and transitional countries.
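For readers unfamiliar with the measure, discretionary accruals are typically estimated as the residual from a regression of total accruals on firm fundamentals (the modified Jones model is a common choice); the thesis does not specify its exact estimation procedure here, so the following Python sketch with invented inputs only illustrates that general approach.

import numpy as np

def discretionary_accruals(TA, dREV, dREC, PPE, A_prev):
    """Residual of a modified-Jones-type regression; all inputs are 1-D arrays over firms."""
    # Scale everything by lagged total assets, regress, and keep the residual.
    X = np.column_stack([1.0 / A_prev, (dREV - dREC) / A_prev, PPE / A_prev])
    y = TA / A_prev
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef  # discretionary (abnormal) accruals

# Invented numbers for four firms
TA     = np.array([ 12.0,  -5.0,   8.0,  -2.0])
dREV   = np.array([ 40.0,  10.0,  25.0,   5.0])
dREC   = np.array([  5.0,   2.0,   4.0,   1.0])
PPE    = np.array([200.0, 150.0, 180.0, 120.0])
A_prev = np.array([500.0, 400.0, 450.0, 300.0])
print(discretionary_accruals(TA, dREV, dREC, PPE, A_prev))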
Abstract:
Increased media exposure to layoffs and corporate quarterly financial reporting have arguably created a common perception – especially favored by the media itself – that companies have been forced to improve their financial performance from quarter to quarter. Academically, the relevant question is whether the companies themselves feel that they are exposed to short-term pressure to perform even if it means that they have to compromise the company's long-term future. This paper studies this issue using results from a survey conducted among the 500 largest companies in Finland. The results show that companies in general feel moderate short-term pressure, with reasonable dispersion across firms. There seems to be a link between the degree of pressure felt and the firm's ownership structure, i.e. we find support for the existence of short-term versus long-term owners. We also find significant ownership-related differences, in line with expectations, in how such short-term pressure is reflected in actual decision variables such as the investment criteria used.
Abstract:
The overlapping sound pressure waves that enter our brain via the ears and auditory nerves must be organized into a coherent percept. Modelling the regularities of the auditory environment and detecting unexpected changes in these regularities, even in the absence of attention, is a necessary prerequisite for orienting towards significant information as well as for speech perception and communication, for instance. The processing of auditory information, in particular the detection of changes in the regularities of the auditory input, gives rise to neural activity in the brain that is seen as a mismatch negativity (MMN) response of the event-related potential (ERP) recorded by electroencephalography (EEG). As the recording of MMN requires neither a subject's behavioural response nor attention towards the sounds, it can be done even with subjects who have problems in communicating or difficulties in performing a discrimination task, for example aphasic and comatose patients, newborns, and even fetuses. Thus, with MMN one can follow the evolution of central auditory processing from the very early, often critical stages of development, and also in subjects who cannot be examined with the more traditional behavioural measures of auditory discrimination. Indeed, recent studies show that central auditory processing, as indicated by MMN, is affected in different clinical populations, such as schizophrenics, as well as during normal aging and abnormal childhood development. Moreover, the processing of auditory information can be selectively impaired for certain auditory attributes (e.g., sound duration, frequency) and can also depend on the context of the sound changes (e.g., speech or non-speech). Although its advantages over behavioural measures are undeniable, a major obstacle to the larger-scale routine use of the MMN method, especially in clinical settings, is the relatively long duration of its measurement. Typically, approximately 15 minutes of recording time is needed for measuring the MMN for a single auditory attribute. Recording a complete central auditory processing profile consisting of several auditory attributes would thus require from one hour to several hours. In this research, I have contributed to the development of new fast multi-attribute MMN recording paradigms in which several types and magnitudes of sound changes are presented in both speech and non-speech contexts in order to obtain a comprehensive profile of auditory sensory memory and discrimination accuracy in a short measurement time (altogether approximately 15 min for 5 auditory attributes). The speed of the paradigms makes them highly attractive for clinical research, their reliability brings fidelity to longitudinal studies, and the language context is especially suitable for studies on language impairments such as dyslexia and aphasia. In addition, I have presented an even more ecological paradigm and, more importantly, an interesting result in view of the theory of MMN, in which the MMN responses are recorded entirely without a repetitive standard tone. All in all, these paradigms contribute to the development of the theory of auditory perception and increase the feasibility of MMN recordings in both basic and clinical research. Moreover, they have already proven useful in studying, for instance, dyslexia, Asperger syndrome and schizophrenia.
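In practice the MMN itself is obtained by subtracting the averaged response to the frequent "standard" sounds from the averaged response to the rare "deviant" sounds and reading the difference in a post-stimulus window (roughly 100-250 ms); the short Python sketch below illustrates this standard subtraction on simulated single-trial data and is not code from these studies.

import numpy as np

def mmn_amplitude(deviant_trials, standard_trials, times_ms, window=(100, 250)):
    """Mean deviant-minus-standard difference amplitude within the given latency window."""
    difference = deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)  # ERP difference wave
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    return difference[mask].mean()

# Simulated data: 200 trials x 350 time points (1 ms steps, -50...299 ms)
rng = np.random.default_rng(0)
times = np.arange(-50, 300)
standard = rng.normal(0.0, 1.0, (200, times.size))
deviant = rng.normal(0.0, 1.0, (200, times.size))
deviant[:, (times >= 100) & (times <= 250)] -= 2.0  # add a negative deflection (the "MMN")
print(mmn_amplitude(deviant, standard, times))       # clearly negative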
Abstract:
Drug-induced liver injury is one of the most frequent reasons for the withdrawal of drugs from the market. In recent years there has been pressure to develop more cost-efficient, faster and easier ways to investigate drug-induced toxicity in order to recognize hepatotoxic drugs in the earlier phases of drug development. A High Content Screening (HCS) instrument is an automated microscope equipped with image analysis software. It makes image analysis faster and decreases the risk of human error by always analyzing the images in the same way. Because less drug and time are needed for the analysis and multiple parameters can be analyzed from the same cells, the method should be more sensitive, effective and cheaper than conventional cytotoxicity assays. Liver cells are rich in mitochondria, and many drugs exert their toxicity on hepatocyte mitochondria. Mitochondria produce the majority of the ATP in the cell through oxidative phosphorylation. They maintain biochemical homeostasis in the cell and participate in cell death. Mitochondria are divided into two compartments by the inner and outer mitochondrial membranes. Oxidative phosphorylation takes place in the inner mitochondrial membrane. A part of the respiratory chain, a protein called cytochrome c, activates caspase cascades when released; this leads to apoptosis. The aim of this study was to implement, optimize and compare mitochondrial toxicity HCS assays in live cells and fixed cells in two cellular models: the human HepG2 hepatoma cell line and rat primary hepatocytes. Three different hepatotoxic and mitochondria-toxic drugs (staurosporine, rotenone and tolcapone) were used. Cells were treated with the drugs, incubated with the fluorescent probes, and the images were then analyzed using the Cellomics ArrayScan VTI reader. Finally, the results obtained with the optimized methods were compared to each other and to the results of conventional cytotoxicity assays, ATP and LDH measurements. After optimization, the live-cell method and rat primary hepatocytes were selected for use in the experiments. Staurosporine was the most toxic of the three drugs and caused the most damage to the cells most quickly. Rotenone was less toxic, but its results were more reproducible, and thus it would serve as a good positive control in screening. Tolcapone was the least toxic. So far, the conventional cytotoxicity assays worked better than the HCS methods. More optimization is needed to make the HCS method more sensitive; this was not possible in this study due to time limits.
Abstract:
Potato virus Y (PVY) currently causes the greatest yield and quality losses in potato (Solanum tuberosum L.) worldwide. Although the yield loss caused by PVY alone is difficult to measure, it has been estimated at 20-80%. The most important route of spread of the virus is infected seed potatoes. High-quality seed potato is a prerequisite for the production of table, food industry and starch potatoes. Visual inspection of the crop usually underestimates the incidence of PVY. Laboratory testing provides more accurate information on the degree of infection of the crop harvested from the field. A problem in PVY testing is that the virus is not detected as reliably in samples taken from dormant tubers as when testing tubers that have already passed dormancy. Various methods, from chemicals (Rindite, bromoethane) to plant hormones (e.g. gibberellic acid) and changes in storage conditions (cold and heat treatment), have been tried for breaking potato dormancy, but the results have been variable. In this thesis, an oxygen-carbon dioxide treatment (40% O2 and 20% CO2) with treatment periods of different lengths was used to break potato dormancy. The aim was to determine whether the treatment affects potato sprouting and the breaking of dormancy earlier than would occur naturally, or the detection of PVY. In addition, the aim was to find out whether PVY determination by ELISA (enzyme-linked immunosorbent assay) can be done as reliably from other plant parts (tuber, sprout) as from the potato leaf, which is currently in general use. The results on the effects of the sprouting treatment on dormancy breaking were variable and not very generalizable. Nor was the treatment observed to affect the detection of PVY infection when testing different sample materials. When the performance of the different plant parts in the test was compared, tuber material was found to underestimate PVY infection in all experiments. Sprout material also, as a rule, underestimated PVY infection in the ELISA determinations. The most reliable test material was the potato leaf.
Abstract:
Methane emissions from natural wetlands and rice paddies constitute a large proportion of atmospheric methane, but the magnitude and year-to-year variation of these methane sources are still unpredictable. Here we describe and evaluate the integration of a methane biogeochemical model (CLM4Me; Riley et al., 2011) into the Community Land Model 4.0 (CLM4CN) in order to better explain spatial and temporal variations in methane emissions. We test new functions for soil pH and redox potential that impact microbial methane production in soils. We also constrain aerenchyma in plants in always-inundated areas in order to better represent wetland vegetation. The satellite-observed inundated fraction is explicitly prescribed in the model because there are large differences between simulated fractional inundation and satellite observations. A rice paddy module is also incorporated into the model, where the fraction of land used for rice production is explicitly prescribed. The model is evaluated at the site level with vegetation cover and water table prescribed from measurements. Explicit site-level evaluations of simulated methane emissions are quite different from evaluating the grid-cell-averaged emissions against available measurements. Using a baseline set of parameter values, our model-estimated average global wetland emissions for the period 1993–2004 were 256 Tg CH4 yr−1, and rice paddy emissions in the year 2000 were 42 Tg CH4 yr−1. Tropical wetlands contributed 201 Tg CH4 yr−1, or 78% of the global wetland flux. Northern-latitude (>50° N) systems contributed 12 Tg CH4 yr−1. We expect this latter number may be an underestimate due to the low high-latitude inundated area captured by satellites and the unrealistically low high-latitude productivity and soil carbon predicted by CLM4. Sensitivity analysis showed a large range (150–346 Tg CH4 yr−1) in predicted global methane emissions. The large range was sensitive to: (1) the amount of methane transported through aerenchyma, (2) soil pH (± 100 Tg CH4 yr−1), and (3) redox inhibition (± 45 Tg CH4 yr−1).
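As a concrete sketch of the kind of parameterization referred to above (a baseline methane production rate modulated by temperature, soil pH and redox factors), the following Python snippet uses illustrative functional forms and constants; it is not the published CLM4Me parameterization.

import math

def ch4_production(base_c_flux, T_kelvin, pH, redox_factor,
                   q10=2.0, T_ref=295.15, pH_opt=6.5, pH_width=2.0):
    """Illustrative methane production rate: baseline flux scaled by T, pH and redox modifiers."""
    f_T = q10 ** ((T_kelvin - T_ref) / 10.0)            # Q10-type temperature response
    f_pH = math.exp(-((pH - pH_opt) / pH_width) ** 2)   # assumed Gaussian pH response
    return base_c_flux * f_T * f_pH * redox_factor      # same units as base_c_flux

# Placeholder inputs: cool, mildly acidic, partly oxidized soil
print(ch4_production(1.0e-7, 290.15, 5.5, 0.8))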