43 results for ANESTHETICS, Volatile: sevoflurane


Relevance:

10.00%

Publisher:

Abstract:

The purpose of the study was to determine what the electricity consumption profile of a two-room apartment looks like and which loads it consists of. A further aim was to examine how load shifting affects the consumption profile and whether load shifting offers savings potential with respect to the customer's electricity costs. When calculating the savings potential, it was assumed that the customer has an electricity contract whose price follows the spot prices of the power exchange (Nord Pool). The results show that the base load of the apartment is caused by the refrigeration appliances, while the large consumption peaks are caused by the stove, the dishwasher and the washing machine. The effect of load shifting on the normal consumption profile is that the peaks caused by the large loads move later into the evening. The calculations showed that load shifting does not bring significant savings to the customer, because achieving savings would require longer measurement periods and large daily variations in the electricity spot prices. The consumption profiles of the refrigeration appliances, and how stops and restarts affect them, were also examined.
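As a minimal sketch of the savings calculation described above (not part of the original study; the hourly spot prices, the load size and the hours are hypothetical), the cost of running a shiftable load at its original hour versus a cheaper late-evening hour could be compared as follows:

# Illustrative sketch only: hypothetical spot prices and loads, not data from the study.
hourly_spot_eur_per_mwh = [45.0, 42.0, 40.0, 38.0, 41.0, 50.0, 65.0, 80.0,
                           75.0, 70.0, 68.0, 66.0, 64.0, 62.0, 60.0, 63.0,
                           70.0, 85.0, 90.0, 72.0, 55.0, 48.0, 44.0, 41.0]

def load_cost(kwh: float, hour: int) -> float:
    """Cost in euros of consuming `kwh` during `hour` at the spot price."""
    return kwh * hourly_spot_eur_per_mwh[hour] / 1000.0  # EUR/MWh -> EUR/kWh

dishwasher_kwh = 1.2                    # assumed energy of one dishwasher cycle
original_hour, shifted_hour = 18, 23    # peak evening hour vs. late evening

saving = load_cost(dishwasher_kwh, original_hour) - load_cost(dishwasher_kwh, shifted_hour)
print(f"Saving from shifting one cycle: {saving:.3f} EUR")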

Relevance:

10.00%

Publisher:

Abstract:

Changes in the electroencephalography (EEG) signal have been used to study the effects of anesthetic agents on brain function. Several commercial EEG-based anesthesia depth monitors have been developed to measure the level of the hypnotic component of anesthesia. Specific anesthetic-related changes can be seen in the EEG, but it still remains difficult to determine whether the subject is conscious or not during anesthesia. EEG reactivity to external stimuli may be seen in unconscious subjects, in anesthesia or even in coma. Changes in regional cerebral blood flow, which can be measured with positron emission tomography (PET), can be used as a surrogate for changes in neuronal activity. The aim of this study was to investigate the effects of dexmedetomidine, propofol, sevoflurane and xenon on the EEG and on the behavior of two commercial anesthesia depth monitors, Bispectral Index (BIS) and Entropy. Slowly escalating drug concentrations were used with dexmedetomidine, propofol and sevoflurane. EEG reactivity at a clinically determined similar level of consciousness was studied, and the performance of BIS and Entropy in differentiating consciousness from unconsciousness was evaluated. Changes in brain activity during emergence from dexmedetomidine- and propofol-induced unconsciousness were studied using PET imaging. Additionally, the effects of normobaric hyperoxia, induced during denitrogenation prior to xenon anesthesia induction, on the EEG were studied. Dexmedetomidine and propofol caused increases in low-frequency, high-amplitude (delta 0.5-4 Hz and theta 4.1-8 Hz) EEG activity during stepwise increased drug concentrations from the awake state to unconsciousness. With sevoflurane, an increase in delta activity was also seen, and an increase in alpha-slow beta (8.1-15 Hz) band power was seen with both propofol and sevoflurane. EEG reactivity to a verbal command in the unconscious state was best retained with propofol, and almost disappeared with sevoflurane. The ability of BIS and Entropy to differentiate consciousness from unconsciousness was poor. At emergence from dexmedetomidine- and propofol-induced unconsciousness, activation was detected in deep brain structures, but not within the cortex. In xenon anesthesia, EEG band powers increased in the delta, theta and alpha (8-12 Hz) frequencies. In steady-state xenon anesthesia, BIS and Entropy indices were low, and these monitors seemed to work well. Normobaric hyperoxia alone did not cause changes in the EEG. All of these results are based on studies in healthy volunteers, and their application to clinical practice should be considered carefully.
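As an illustration of the band-power measures referred to above (delta, theta, alpha), and not taken from the thesis itself, the following sketch estimates band powers of a single EEG channel with Welch's method; the sampling rate and the synthetic signal are assumptions:

# Illustrative sketch: band powers of a synthetic one-channel EEG signal.
import numpy as np
from scipy.signal import welch

fs = 250.0                                            # assumed sampling rate in Hz
t = np.arange(0, 30, 1 / fs)                          # 30 s of data
eeg = np.random.default_rng(0).normal(size=t.size)    # stand-in for a real recording

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # 4 s windows

def band_power(lo: float, hi: float) -> float:
    """Integrate the power spectral density between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

for name, (lo, hi) in {"delta": (0.5, 4), "theta": (4.1, 8), "alpha": (8, 12)}.items():
    print(f"{name}: {band_power(lo, hi):.4f}")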

Relevance:

10.00%

Publisher:

Abstract:

The behavioural finance literature expects systematic and significant deviations from efficiency to persist in securities markets due to the behavioural and cognitive biases of investors. These behavioural models attempt to explain the coexistence of intermediate-term momentum and long-term reversals in stock returns through systematic violations of rational investor behaviour. This study investigates the anchoring bias of investors and the profitability of the 52-week high momentum strategy (GH henceforward). The relatively highly volatile OMX Helsinki stock exchange is a suitable market for examining the momentum effect, since international investors tend to unwind their positions in the most distant securities markets first in times of market turbulence. Empirical data are collected from Thomson Reuters Datastream and the OMX Nordic website. The objective of the study is to provide a thorough analysis by formulating a self-financing GH momentum portfolio. First, the seasonality of the strategy is examined by taking the January effect into account and by studying long-term abnormal returns. The results indicate that the GH strategy suffers significantly negative returns in January, but the strategy is not prone to reversals in the long term. Next, the predictive proxies of momentum returns are investigated, with acquisition prices and the 52-week high statistic as anchors. The results show that acquisition prices have no explanatory power over the GH strategy's abnormal returns. Finally, the efficacy of the GH strategy is examined after taking transaction costs into account, finding that the robust abnormal returns remain statistically significant despite the transaction costs. In conclusion, the relative distance between a stock's current price and its 52-week high largely explains the profits of momentum investing. The results indicate that intermediate-term momentum and long-term reversals are separate phenomena. This presents a challenge to current behavioural theories, which model these aspects of stock returns as subsequent components of how securities markets respond to relevant information.
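As a hedged illustration of the 52-week high ranking on which the GH strategy relies (not the thesis's actual portfolio construction; tickers, prices and the shortened window are made up), stocks could be ranked by the ratio of their current price to their 52-week high and split into the long and short legs of a self-financing portfolio:

# Illustrative sketch: ranking stocks by nearness to their 52-week high.
# Prices are hypothetical; the real study uses Thomson Reuters Datastream data.
import pandas as pd

prices = pd.DataFrame({
    "AAA": [10, 12, 11, 13, 12.5],
    "BBB": [20, 18, 17, 16, 15],
    "CCC": [5, 5.5, 6, 6.2, 6.1],
}, index=pd.date_range("2014-01-01", periods=5, freq="W"))
# In practice the window would cover 52 weekly observations; 5 are shown for brevity.

ratio = prices.iloc[-1] / prices.max()      # current price / 52-week high
ranked = ratio.sort_values(ascending=False)
winners = ranked.head(1).index.tolist()     # long leg (closest to the 52-week high)
losers = ranked.tail(1).index.tolist()      # short leg (furthest from it)
print("long:", winners, "short:", losers)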

Relevance:

10.00%

Publisher:

Abstract:

The wastewater generated during the latex manufacturing process is treated in a packed column to remove the volatile organic compounds it contains. During the treatment process, the solids in the wastewater adhere to the surface of the packing elements and eventually plug them. Replacing the packing and washing the fouled packing elements cause costs. Legislation and the agreement with the municipal wastewater treatment plant require that emissions of volatile compounds be reduced below a certain level. The first objective of the work was to study the composition of the latex plant's wastewater streams with mass and component balances, in particular with respect to the substance fouling the packed column. The second objective was to find ways to extend the operating period of the packed column beyond the current one. The third objective was to find or develop a pretreatment method for removing the fouling substance upstream of the packed column. The final objective was to optimize the operation of the process so that savings would be achieved through reduced energy consumption. According to the study, the packed column removes 100 per cent of the volatile organic compounds and 99.5 per cent of the chemical oxygen demand from the wastewater. The operating period of the packed column can be extended by using the pressure difference between the top and bottom of the column to anticipate its degree of fouling and the need to replace the packing. Suitable wastewater pretreatment methods identified in the study are decantation, in which solids are removed with a six-hour residence time, and controlled precipitation of solids, in which both volatile organic compounds and solids are removed with a ten-minute residence time. Energy consumption can be optimized by reducing the steam flow to the packed column without compromising separation efficiency.
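Purely as a minimal illustration of the removal-efficiency figures quoted above (the inlet and outlet concentrations below are assumptions, not measured values from the work):

# Illustrative sketch with assumed concentrations (mg/L); not measured values from the study.
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percentage of a component removed across the packed column."""
    return 100.0 * (c_in - c_out) / c_in

voc_in, voc_out = 120.0, 0.0      # volatile organic compounds
cod_in, cod_out = 4000.0, 20.0    # chemical oxygen demand

print(f"VOC removal: {removal_efficiency(voc_in, voc_out):.1f} %")   # 100.0 %
print(f"COD removal: {removal_efficiency(cod_in, cod_out):.1f} %")   # 99.5 %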

Relevance:

10.00%

Publisher:

Abstract:

This thesis aims to find an effective way of conducting a target audience analysis (TAA) in the cyber domain. There are two main focal points: the nature of the cyber domain and the method of the TAA. Regarding the cyber domain, the object is to find the opportunities, restrictions and caveats that result from its digital and temporal nature. This is the environment in which the TAA method is examined in this study. As the TAA is an important step of any psychological operation and critical to its success, the method used must cover all the main aspects affecting the choice of a proper target audience. The first part of the research was done by sending an open-ended questionnaire to operators in the field of information warfare both in Finland and abroad. As the results were inconclusive, the research was completed by assessing the applicability of the United States Army field manual FM 3-05.301 in the cyber domain via a theory-based content analysis. FM 3-05.301 was chosen because it presents a complete method of the TAA process. The findings were tested against the results of the questionnaire and new scientific research in the field of psychology. The cyber domain was found to be "fast and vast", volatile and uncontrollable. Although governed by laws to some extent, the cyber domain is unpredictable by nature and cannot be controlled to any reasonable degree. The anonymity and lack of verification often present in digital channels mean that anyone can have an opinion, and any message sent may change or even become counterproductive to its original purpose. The TAA method of FM 3-05.301 is applicable in the cyber domain, although some parts of the method are outdated and are therefore suggested to be updated if used in that environment. The target audience categories of step two of the process were replaced by new groups that exist in the digital environment. The accessibility assessment (step eight) was also redefined, as in digital media the mere existence of a written text is typically not enough to convey the intended message to the target audience. The scientific studies made in computer science, psychology and sociology about the behavior of people in social media (and in the cyber domain overall) call for a more extensive remake of the TAA process. This falls, however, outside the scope of this work. It is thus suggested that further research should be carried out in search of computer-assisted methods and a more thorough TAA process, utilizing the latest discoveries on human behavior.

Relevance:

10.00%

Publisher:

Abstract:

In this work, the feasibility of floating-gate technology in analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to get worse because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics such as power, area, bandwidth and accuracy are interrelated. Furthermore, poor flexibility, i.e. lack of reconfigurability, reuse of IP etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with the practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using "double"-oxide transistors intended for providing devices that operate with higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer SNN, where the adaptive properties of the FGT are taken advantage of. A compact realization of spike-timing-dependent plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability for analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
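As a rough, hedged illustration of pair-based spike-timing-dependent plasticity, the learning rule named above (this is a generic textbook form with made-up parameters, not the circuit-level realization presented in the thesis), a weight update might look like:

# Illustrative pair-based STDP rule; parameters are generic, not those of the FGT synapses.
import math

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair with timing difference dt = t_post - t_pre."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post: potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:           # post before (or with) pre: depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)

print(stdp_dw(10.0, 15.0))   # small positive change
print(stdp_dw(15.0, 10.0))   # small negative change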

Relevance:

10.00%

Publisher:

Abstract:

The monitoring and control of hydrogen sulfide (H2S) level is of great interest for a wide range of application areas including food quality control, defense and antiterrorist applications and air quality monitoring e.g. in mines. H2S is a very poisonous and flammable gas. Exposure to low concentrations of H2S can result in eye irritation, a sore throat and cough, shortness of breath, and fluid retention in the lungs. These symptoms usually disappear in a few weeks. Long-term, low-level exposure may result in fatigue, loss of appetite, headache, irritability, poor memory, and dizziness. Higher concentrations of 700 - 800 ppm tend to be fatal. H2S has a characteristic smell of rotten egg. However, because of temporary paralysis of olfactory nerves, the smelling capability at concentrations higher than 100 ppm is severely compromised. In addition, volatile H2S is one of the main products during the spoilage of poultry meat in anaerobic conditions. Currently, no commercial H2S sensor is available which can operate under anaerobic conditions and can be easily integrated in the food packaging. This thesis presents a step-wise progress in the development of printed H2S gas sensors. Efforts were made in the formulation, characterization and optimization of functional printable inks and coating pastes based on composites of a polymer and a metal salt as well as a composite of a metal salt and an organic acid. Different processing techniques including inkjet printing, flexographic printing, screen printing and spray coating were utilized in the fabrication of H2S sensors. The dispersions were characterized by measuring turbidity, surface tension, viscosity and particle size. The sensing films were characterized using X-ray photoelectron spectroscopy, X-ray diffraction, atomic force microscopy and an electrical multimeter. Thin and thick printed or coated films were developed for gas sensing applications with the aim of monitoring the H2S concentrations in real life applications. Initially, a H2S gas sensor based on a composite of polyaniline and metal salt was developed. Both aqueous and solvent-based dispersions were developed and characterized. These dispersions were then utilized in the fabrication of roll-to-roll printed H2S gas sensors. However, the humidity background, long term instability and comparatively lower detection limit made these sensors less favourable for real practical applications. To overcome these problems, copper acetate based sensors were developed for H2S gas sensing. Stable inks with excellent printability were developed by tuning the surface tension, viscosity and particle size. This enabled the formation of inkjet-printed high quality copper acetate films with excellent sensitivity towards H2S. Furthermore, these sensors showed negligible humidity effects and improved selectivity, response time, lower limit of detection and coefficient of variation. The lower limit of detection of copper acetate based sensors was further improved to sub-ppm level by incorporation of catalytic gold nano-particles and subsequent plasma treatment of the sensing film. These sensors were further integrated in an inexpensive wirelessly readable RLC-circuit (where R is resistor, L is inductor and C is capacitor). The performance of these sensors towards biogenic H2S produced during the spoilage of poultry meat in the modified atmosphere package was also demonstrated in this thesis. This serves as a proof of concept that these sensors can be utilized in real life applications.
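A minimal sketch, not from the thesis, of why an RLC circuit allows wireless readout: the sensing film changes the effective capacitance (and resistance), which shifts the resonant frequency f0 = 1/(2*pi*sqrt(L*C)) that an external reader can detect; the component values below are assumptions.

# Illustrative sketch of an RLC resonant readout; component values are assumed.
import math

def resonant_frequency(l_henry: float, c_farad: float) -> float:
    """Resonant frequency of an LC tank in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

L = 10e-6            # 10 uH printed inductor (assumption)
C_clean = 100e-12    # 100 pF before exposure (assumption)
C_exposed = 110e-12  # capacitance after the sensing film reacts with H2S (assumption)

f_clean = resonant_frequency(L, C_clean)
f_exposed = resonant_frequency(L, C_exposed)
print(f"shift: {(f_clean - f_exposed) / 1e6:.3f} MHz")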

Relevance:

10.00%

Publisher:

Abstract:

In the last decades, the chemical synthesis of short oligonucleotides has become an important subject of study owing to the discovery of new functions for nucleic acids such as antisense oligonucleotides (ASOs), aptamers, DNAzymes, microRNA (miRNA) and small interfering RNA (siRNA). Their applications in modern therapies and fundamental medicine, in the treatment of different cancers, viral infections and genetic disorders, have established the need to develop scalable methods for their cheaper and easier industrial manufacture. While small-scale solid-phase oligonucleotide synthesis is the method of choice in the field, various challenges remain associated with the production of short DNA and RNA oligomers in very large quantities. Solution-phase synthesis of oligonucleotides, on the other hand, offers a more predictable scale-up of the synthesis and is amenable to standard industrial manufacturing techniques. In the present thesis, various protocols for the synthesis of short DNA and RNA oligomers were studied on peracetylated and methylated β-cyclodextrin supports, and also on a pentaerythritol-derived support. On the peracetylated and methylated β-cyclodextrin soluble supports, the coupling cycle was simplified by replacing the typical 5′-O-(4,4′-dimethoxytrityl) protecting group with an acid-labile acetal-protected 5′-O-(1-methoxy-1-methylethyl) group, which upon acid-catalyzed methanolysis released easily removable volatile products. For this purpose, 5′-O-(1-methoxy-1-methylethyl) 3′-(2-cyanoethyl-N,N-diisopropylphosphoramidite) monomeric building blocks were synthesized. Alternatively, on the precipitative pentaerythritol support, novel 2´-O-(2-cyanoethyl)-5´-O-(1-methoxy-1-methylethyl) protected phosphoramidite building blocks for RNA synthesis were prepared, and their applicability was demonstrated by the synthesis of a pentamer. Similarly, a method for the preparation of short RNAs from commercially available 5´-O-(4,4´-dimethoxytrityl)-2´-O-(tert-butyldimethylsilyl)ribonucleoside 3´-(2-cyanoethyl-N,N-diisopropylphosphoramidite) building blocks was developed.

Relevance:

10.00%

Publisher:

Abstract:

In today's complex and volatile business environment, companies that are able to turn the operational data they generate into data warehouses can achieve a significant competitive advantage. Using predictive analytics to anticipate future trends allows companies to identify the key factors that differentiate them from their competitors. Using predictive analytics as part of the decision-making process enables more agile, real-time decision making. The purpose of this master's thesis is to compile a theoretical framework for analytics modelling from the perspective of a business end user and to apply this modelling process to the case company of the thesis. The theoretical model was used to model customer relationships and to identify predictive factors for sales forecasting. The work was carried out for a Finnish wholesaler of industrial filters with business operations in Finland, Russia and the Baltics. This study is a quantitative case study in which the most important data source was the case company's transaction data. The data for the work were obtained from the company's ERP system.
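Purely as an illustration of identifying predictive factors for sales forecasting (this is not the model actually built in the thesis; the feature choices and numbers below are hypothetical), a simple least-squares fit on monthly transaction aggregates could look like:

# Illustrative sketch: a simple least-squares sales forecast from hypothetical monthly features.
import numpy as np

# Columns: previous month's sales, number of active customers, order count (all hypothetical).
X = np.array([[100.0, 12, 30],
              [110.0, 13, 32],
              [120.0, 15, 35],
              [115.0, 14, 33],
              [130.0, 16, 38]])
y = np.array([110.0, 120.0, 115.0, 130.0, 140.0])   # next month's sales

X1 = np.column_stack([np.ones(len(X)), X])           # add intercept
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)        # ordinary least squares

next_month = np.array([1.0, 140.0, 17, 40])          # hypothetical latest observation
print(f"forecast: {next_month @ coef:.1f}")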

Relevance:

10.00%

Publisher:

Abstract:

Globalization and interconnectedness in the worldwide sphere have changed the prevailing modus operandi of organizations around the globe and have challenged existing practices along with the business-as-usual mindset. There are no rules for creating a competitive advantage and positioning within an unstable, constantly changing and volatile globalized business environment. The financial industry, the locomotive or flagship industry of the global economy, has, especially in the aftermath of the financial crisis, reached a point where it is trying to recover and redefine its strategic orientation and positioning within the global business arena. Innovation has always been a trend and a buzzword, and many have considered it the ultimate answer to any kind of problem. The mantra "Innovate or Die" has prevailed in organizations of every kind in a sometimes ruthless endeavour to develop cutting-edge products and services and capture a landmark position in the market. The emerging shift from a closed to an open innovation paradigm has been considered a new operational mechanism for the management and leadership of the company of the future. In that respect, open innovation research has been on a tremendous growth trajectory, putting forward a new way of exchanging and using surplus knowledge in order to sustain innovation within organizations and at the level of the industry. In this reality, something seems to be missing: the human element. This research, by going beyond the traditional narratives of open innovation, aims at making an innovative theoretical and managerial contribution developed and grounded on the ongoing discussion regarding the individual and organizational barriers to open innovation within the financial industry. By working across disciplines and reaching out to primary data, it debunks the myth that open innovation is solely a knowledge inflow and outflow mechanism and sheds light on why and how organizational open innovation works by illuminating the broader dynamics and underlying principles of this fascinating paradigm. Little attention has been given to the role of the human element, the foundational prerequisite of trust encapsulated within the very nature of organizing for open innovation, the organizational capabilities, the individual profiles of open innovation leaders, the definition of open innovation in the realms of the financial industry, the strategic intent of the financial industry, and the need for nurturing a societal impact for human development. In that respect, this research introduces the trust-embedded approach to open innovation as a new, insightful way of organizing for open innovation. It unveils the peculiarities of the corporate and individual spheres that act as a catalyst towards the creation of productive open innovation activities. The motivation of this research is the fundamental question revolving around the need for financial institutions to recognise the importance of organizing for open innovation. The overarching question is why and how to create a corporate culture of openness in the financial industry, an organizational environment that can help open innovation excel. This research offers novel and cutting-edge outcomes and propositions through the prism of both theory and practice. The trust-embedded open innovation paradigm captures the norms and narratives around the way of leading open innovation in the 21st century by cultivating a human-centric mindset that leads to the creation of human organizations, leaving behind the dehumanizing mindset currently prevailing within the financial industry.

Relevance:

10.00%

Publisher:

Abstract:

Cleavages have been central to understanding the relationship between political parties and voters, but the credibility of the cleavage approach has been increasingly debated. This is because of decreasing party loyalty, fewer ideological differences between the parties and general social structural change, amongst other factors. By definition, cleavages arise when social structural groups recognize their clashing interests, which are reflected in common values and attitudes, and vote for parties that are dedicated to defending the interests of the groups concerned. This study assesses the relevance of the cleavage approach in the Finnish context. The research problem of this study is: what kind of cleavage structure exists in Finland at the beginning of the 21st century? Finland represents a case that has traditionally been characterized by a strong and diverse cleavage structure, notable ideological fragmentation in the electorate and an ideologically diverse party system. Nevertheless, the picture of party-voter ties in Finland still remains incomplete with regard to a thorough analysis of cleavages. In addition, despite the vast amount of literature on cleavages in political science, studies that thoroughly analyze national cleavage structures by assessing the relationship between social structural position, values and attitudes, and party choice have been rare. The research questions are approached by deploying statistical analyses, using the Finnish National Election Studies from 2003, 2007 and 2011 as data. In this study, seven different social structural cleavage bases are analyzed: native language, type of residential area, occupational class, education, denomination, gender and age cohort. Four different value/attitudinal dimensions were identified in this study: economic right and authority, regional and socioeconomic equality, sociocultural, and European Union dimensions. This study shows that despite the weak overall effect of social structural positions on values and attitudes, a few rather strong connections between them were identified. The overall impact of social structural position and of values and attitudes on party choice varies significantly between parties. Cleavages still exist in Finland, and the cleavage structure partly reflects the old basis of the Finnish party system. The cleavage that is based on the type of residential area and reflected in the regional and socioeconomic equality dimension concerns primarily the voters of the Centre Party and the Coalition Party. The linguistic cleavage concerns mostly the voters of the Swedish People's Party. The classic class cleavage, reflected in the regional and socioeconomic equality dimension, concerns in turn first and foremost the blue-collar voters of the Left Alliance and the Social Democratic Party, the agricultural entrepreneur voters of the Centre Party and the higher professional and manager voters of the Coalition Party. The conflict with the most potential as a cleavage is the one based on social status (occupational class and education), and it is reflected in the sociocultural and EU dimensions. It sets the voters of the True Finns against the voters of the Green League and the Coalition Party. The study underlines the challenges the old parties have met after the volatile election of 2011, which shook the cleavage structure. It also describes the complexity of the Finnish conflict structure and the multidimensionality of the electoral competition between the parties.

Relevance:

10.00%

Publisher:

Abstract:

Volatile organic compounds (VOCs) are one of the most common and widespread groups of environmental contaminants. By definition, the compounds in the VOC group are volatile, low-molecular-mass (16-250 Da) compounds, most of which are either harmful or toxic. VOCs are mainly emitted into the environment by human activity (industry, cars, agriculture) and end up in water bodies and soil. In addition to their harmful effects on humans, VOCs contribute, for example, to global warming and the formation of smog. For these reasons it is important to analyze VOC concentrations. The breadth of the group of volatile organic compounds and the differences in their physical and chemical properties (polarity, vapor pressure, water solubility) make their analysis challenging. VOCs are most commonly analyzed by gas chromatography-mass spectrometry (GC-MS). The literature part of this master's thesis reviews the different GC-MS instrument configurations used in VOC analysis and their characteristics. It also focuses on the different sample preparation methods for VOCs in water, soil and sediment samples. In the experimental part, six metabolites of the kynurenine pathway were analyzed from cell samples. The kynurenine pathway is the most important catabolic pathway of tryptophan in mammals. The kynurenine pathway is enzymatically activated, for example, during inflammation, neurodegenerative processes and the immune response. The compounds of the kynurenine pathway are believed to increase cellular toxicity but to improve the cell's ability to proliferate and to reduce cell death. For example, an increased amount of 3-hydroxykynurenine has been linked to neurological diseases such as Huntington's and Parkinson's disease. In the experimental part, a compound-specific MRM method was created for an ultra-high-performance liquid chromatograph coupled to an electrospray ionization triple quadrupole mass spectrometer. With the developed and optimized method, the concentrations of L-kynurenine, kynurenic acid, 3-hydroxykynurenine, anthranilic acid, 3-hydroxyanthranilic acid and quinolinic acid in cell culture samples were quantified simultaneously using the internal and external standard methods. Kynurenine, kynurenic acid and anthranilic acid were found to be the main metabolites of the cell samples. Only 3-hydroxykynurenic acid and quinolinic acid were not detected in any of the samples.
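As a simple illustration of internal-standard quantification as used in MRM methods generally (not the thesis's actual calibration data; all areas and concentrations below are hypothetical), an analyte concentration can be read off a linear calibration of the analyte-to-internal-standard peak-area ratio:

# Illustrative internal-standard quantification; all areas and concentrations are hypothetical.
import numpy as np

# Calibration: known analyte concentrations (uM) and measured area ratios (analyte / IS).
cal_conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
cal_ratio = np.array([0.11, 0.21, 0.40, 1.02, 2.05])

slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)   # linear calibration curve

def quantify(area_analyte: float, area_is: float) -> float:
    """Concentration (uM) of the analyte from its peak-area ratio to the internal standard."""
    ratio = area_analyte / area_is
    return (ratio - intercept) / slope

print(f"kynurenine in sample: {quantify(15400.0, 20100.0):.2f} uM")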

Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis is to determine whether the low-volatility anomaly exists in the Finnish stock market. The aim is addressed in the empirical part of the work by analyzing the return time series of stocks listed on the Finnish stock market. The thesis also examines the effect of the financial crisis on the occurrence of the anomaly. The study covers the period from January 2001 to January 2015. Portfolios are formed according to the historical volatility of the stocks. The performance of these portfolios relative to the market is evaluated using absolute returns, the Sharpe ratio and Jensen's alpha. The OMXH CAP index and a market portfolio constructed from the research data are used as market indices. The best absolute return from 2001 to 2015 was obtained by investing in stocks of average volatility. The best risk-adjusted return, however, was achieved by investing in low-volatility stocks. The thesis finds evidence of the low-volatility anomaly in the Finnish stock market when the whole data set is considered. Perhaps the most interesting finding of the thesis, however, is that the low-volatility anomaly disappeared from the Finnish stock market after the financial crisis. The very strong outperformance of low-volatility stocks seen before the financial crisis vanished completely after it. In other words, the relationship between risk and return has turned upside down since the financial crisis, and the low-volatility anomaly can no longer be said to exist.
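As a hedged illustration of the performance measures named above (not the thesis's calculations or data), the following sketch computes an annualized Sharpe ratio and a monthly Jensen's alpha for a hypothetical monthly return series:

# Illustrative Sharpe ratio and Jensen's alpha for hypothetical monthly returns.
import numpy as np

rng = np.random.default_rng(1)
portfolio = rng.normal(0.006, 0.03, 120)   # hypothetical monthly portfolio returns
market = rng.normal(0.005, 0.04, 120)      # hypothetical market returns
rf = 0.001                                 # assumed monthly risk-free rate

excess_p = portfolio - rf
excess_m = market - rf

sharpe_annual = np.sqrt(12) * excess_p.mean() / excess_p.std(ddof=1)

beta = np.cov(excess_p, excess_m, ddof=1)[0, 1] / np.var(excess_m, ddof=1)
jensen_alpha_monthly = excess_p.mean() - beta * excess_m.mean()

print(f"Sharpe (annualized): {sharpe_annual:.2f}")
print(f"Jensen's alpha (monthly): {jensen_alpha_monthly:.4f}")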