14 results for Non-linear phenomena
in Helda - Digital Repository of the University of Helsinki
Abstract:
The amount and location of dead wood are of interest not only for habitat diversity but also for the storage of atmospheric carbon. The aim of this study was to develop an area-based model that uses airborne laser scanning data to locate dead-wood sites and to estimate the amount of dead wood. At the same time, we examined how the explanatory power of the model changes as the size of the modelled grid cell is increased. The study area was located in Sonkajärvi, eastern Finland, and consisted mainly of young, managed commercial forests. The study used low-pulse-density laser scanning data together with field data on dead wood measured in strips. The data were divided so that one quarter was used for modelling and the remainder was reserved for testing the finished models. Both parametric and non-parametric modelling methods were used. Logistic regression was used to predict the probability of dead-wood occurrence for grid cells of different sizes (0.04, 0.20, 0.32, 0.52, and 1.00 ha). The explanatory variables of the models were selected from among 80 laser features and their transformations, in three stages. First, the variables were examined visually by plotting them against the amount of dead wood. In the second stage, the explanatory power of the variables judged most suitable in the first stage was tested with single-variable models. In the final multivariable model, the criterion for including an explanatory variable was statistical significance at the 5% risk level. The model created for the 0.20 ha cell size was re-parameterised for the other cell sizes. In addition to the parametric logistic regression modelling, the data sets for the 0.04 and 1.0 ha cell sizes were classified with non-parametric CART (Classification and Regression Trees) modelling. The CART method was used to search the data for hard-to-detect non-linear dependencies between the laser features and the amount of dead wood. The CART classification was carried out both for dead-wood presence and for dead-wood volume. CART classification outperformed logistic regression in classifying the cells by dead-wood presence. The classification obtained with the logistic model improved as the cell size grew from 0.04 ha (kappa 0.19) up to 0.32 ha (kappa 0.38). At the 0.52 ha cell size the kappa value of the classification turned downward (kappa 0.32) and declined further up to the one-hectare cell size (kappa 0.26). The CART classification improved as the cell size grew, and its results were better than those of the logistic modelling at both the 0.04 ha (kappa 0.24) and the 1.0 ha (kappa 0.52) cell sizes. The relative RMSE of the cell-level dead-wood volumes determined with the CART models decreased with increasing cell size: at the 0.04 ha cell size the relative RMSE of the dead-wood amount for the whole data set was 197.1%, whereas at the one-hectare cell size the corresponding figure was 120.3%. Based on the results of this study, the relationship between the field-measured amount of dead wood and the laser features used here is very weak at small cell sizes but strengthens somewhat as the cell size grows. As the cell size used in modelling grows, however, detecting small concentrations of dead wood becomes more difficult. In this study, the dead-wood presence of a site could be mapped reasonably well with a large cell size, but mapping small sites did not succeed with the methods used. Locating small sites by laser scanning calls for further research, in particular on the use of high-pulse-density laser data in dead-wood inventories.
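The kappa values quoted above measure classification agreement beyond chance. As a minimal illustration, Cohen's kappa can be computed as follows; the presence/absence labels are invented for the example, not the thesis data:

```python
# Cohen's kappa, the agreement statistic reported for the plot
# classifications. The presence/absence labels below are invented
# for illustration, not the thesis data.

def cohens_kappa(observed, predicted):
    """Agreement beyond chance between two equal-length label lists."""
    n = len(observed)
    labels = set(observed) | set(predicted)
    # observed agreement
    p_o = sum(o == p for o, p in zip(observed, predicted)) / n
    # chance agreement from the marginal class frequencies
    p_e = sum((observed.count(c) / n) * (predicted.count(c) / n)
              for c in labels)
    return (p_o - p_e) / (1 - p_e)

obs  = [1, 1, 0, 0, 1, 0, 0, 0]   # field-observed dead-wood presence
pred = [1, 0, 0, 0, 1, 0, 1, 0]   # model-predicted presence
print(round(cohens_kappa(obs, pred), 3))  # → 0.467
```

A kappa near 0 means agreement no better than chance; the values of 0.19–0.52 reported above thus range from slight to moderate agreement.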
Abstract:
The effect of temperature on the height growth of Scots pine in the northern boreal zone in Lapland was studied on two different time scales. Intra-annual growth was monitored in four stands over up to four growing seasons using an approximately biweekly measurement interval. Inter-annual growth was studied using growth records representing seven stands and five geographical locations. All the stands were growing on dry to semi-dry heath, a typical site type for pine stands in Finland. The methodology is based on applied time-series analysis and multilevel modelling. Intra-annual elongation of the leader shoot correlated with temperature sum accumulation. Height growth ceased when, on average, 41% of the relative temperature sum of the site was achieved (the observed minimum and maximum were 38% and 43%). The relative temperature sum was calculated by dividing the actual temperature sum by the long-term mean of the total annual temperature sum for the site. Our results suggest that annual height growth ceases when a location-specific temperature sum threshold is attained. The positive effect of the mean July temperature of the previous year on annual height increment proved to be very strong at high latitudes. The mean November temperature of the year before the previous one had a statistically significant effect on height increment in the three northernmost stands. The effect of mean monthly precipitation on annual height growth was statistically insignificant. There was a non-linear dependence between the length and needle density of annual shoots. Exceptionally low height growth results in high needle density, but the effect is weaker in years of average or good height growth. Radial growth and the next year's height growth are both largely controlled by the current July temperature. Nevertheless, their growth variation in terms of minimum and maximum is not necessarily strongly correlated.
This is partly because height growth is more sensitive to changes in temperature. In addition, the actual effective temperature period is not exactly the same for these two growth components. Yet there is a long-term balance that was also statistically distinguishable; radial growth correlated significantly with height growth with a lag of 2 years. Temperature periods shorter than a month are more effective variables than mean monthly values, but the improvement ranges from modest to good when Julian days or growing degree days are applied as pointers.
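The relative temperature sum described above can be sketched in code. The +5 °C base temperature, the daily mean temperatures, and the long-term annual sum of 100 degree days are illustrative assumptions, not values from the study:

```python
# Relative temperature sum: accumulated degree days divided by the
# site's long-term mean annual sum. The base temperature, daily means,
# and long-term annual sum are illustrative assumptions.

BASE = 5.0  # °C, a base temperature commonly used for boreal degree-day sums

def temperature_sum(daily_means, base=BASE):
    """Degree days accumulated above the base temperature."""
    return sum(max(t - base, 0.0) for t in daily_means)

def relative_temperature_sum(daily_means, long_term_annual_sum, base=BASE):
    """Fraction of the site's mean annual temperature sum achieved so far."""
    return temperature_sum(daily_means, base) / long_term_annual_sum

season = [4.0, 8.0, 12.0, 15.0, 11.0, 9.0]  # hypothetical daily mean temperatures
print(relative_temperature_sum(season, 100.0))  # → 0.3
```

Per the abstract, height growth would be expected to cease once this fraction reaches roughly 0.41 for the site.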
Abstract:
In this thesis, the kinetics of several alkyl, halogenated alkyl, and alkenyl free radical reactions with NO2, O2, Cl2, and HCl reactants were studied over a wide temperature range under time-resolved conditions. A laser photolysis photoionisation mass spectrometer coupled to a flow reactor was the experimental method employed, and this thesis presents the first measurements performed with the newly constructed experimental system. During this work, a great amount of effort was devoted to designing, building, testing, and improving the experimental apparatus. Carbon-centred free radicals were generated by pulsed 193 or 248 nm photolysis of suitable precursors along the tubular reactor. The kinetics were studied under pseudo-first-order conditions using either He or N2 buffer gas. The temperature and pressure ranges employed were 190–500 K and 0.5–45 torr, respectively. The possible role of heterogeneous wall reactions was investigated by employing reactor tubes of different sizes, i.e. by significantly varying the surface-to-volume ratio. In this thesis, significant new contributions to the kinetics of carbon-centred free radical reactions with nitrogen dioxide were obtained. Altogether eight substituted alkyl (CH2Cl, CHCl2, CCl3, CH2I, CH2Br, CHBr2, CHBrCl, and CHBrCH3) and two alkenyl (C2H3, C3H3) free radical reactions with NO2 were investigated as a function of temperature. The bimolecular rate coefficients of all these reactions were observed to possess negative temperature dependencies, while pressure dependencies were not observed for any of these reactions. Halogen substitution was observed to moderately reduce the reactivity of substituted alkyl radicals in the reaction with NO2, while the resonance stabilisation of the alkenyl radicals lowers their reactivity with respect to NO2 only slightly. Two reactions relevant to atmospheric chemistry, CH2Br + O2 and CH2I + O2, were also investigated.
It was noticed that while the CH2Br + O2 reaction shows a pronounced pressure dependence, characteristic of peroxy radical formation, no such dependence was observed for the CH2I + O2 reaction. The observed primary products of the CH2I + O2 reaction were the I atom and the IO radical. The kinetics of the CH3 + HCl, CD3 + HCl, CH3 + DCl, and CD3 + DCl reactions were also studied. While all these reactions possess positive activation energies, in contrast to the other systems investigated in this thesis, the CH3 + HCl and CD3 + HCl reactions show a non-linear temperature dependency on the Arrhenius plot. The reactivity of substituted methyl radicals toward NO2 was observed to increase with decreasing electron affinity of the radical. The same trend was observed for the reactions of substituted methyl radicals with Cl2. It is proposed that interactions of frontier orbitals are responsible for these observations, and that Frontier Orbital Theory could be used to explain the observed reactivity trends of these highly exothermic reactions with reactant-like transition states.
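The activation energies mentioned above are conventionally extracted from the slope of an Arrhenius plot, ln k versus 1/T. A sketch with synthetic rate coefficients; the pre-exponential factor and activation energy below are assumed values for illustration, not measurements from the thesis:

```python
import math

# Extracting an activation energy from an Arrhenius plot (ln k vs 1/T).
# The rate coefficients are synthetic, generated from an assumed
# Arrhenius law; A_true and Ea_true are illustrative values only.

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(T, A, Ea):
    """k(T) = A * exp(-Ea / (R T))."""
    return A * math.exp(-Ea / (R * T))

A_true, Ea_true = 1.0e-12, 6.0e3            # cm^3 s^-1, J mol^-1 (assumed)
temps = [200.0, 250.0, 300.0, 400.0, 500.0]
ks = [arrhenius(T, A_true, Ea_true) for T in temps]

# least-squares slope of ln k = ln A - (Ea/R) * (1/T)
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in ks]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
Ea_fit = -slope * R
print(round(Ea_fit))  # recovers the assumed 6000 J/mol
```

A straight line on this plot corresponds to simple Arrhenius behaviour; the curvature reported for CH3 + HCl and CD3 + HCl means a single (A, Ea) pair cannot describe the whole temperature range.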
Abstract:
Pressurised hot water extraction (PHWE) exploits the unique temperature-dependent solvent properties of water, minimising the use of harmful organic solvents. Water is an environmentally friendly, cheap, and easily available extraction medium. The effects of temperature, pressure, and extraction time in PHWE have often been studied, but here the emphasis was on other parameters important for the extraction, most notably the dimensions of the extraction vessel and the stability and solubility of the analytes to be extracted. Non-linear data analysis and self-organising maps were employed in the data analysis to obtain correlations between the parameters studied, the recoveries, and the relative errors. First, pressurised hot water extraction was combined on-line with liquid chromatography–gas chromatography (LC-GC), and the system was applied to the extraction and analysis of polycyclic aromatic hydrocarbons (PAHs) in sediment. The method is of superior sensitivity compared with traditional methods, and only a small 10 mg sample was required for analysis. The commercial extraction vessels were replaced by laboratory-made stainless steel vessels because of some problems that arose. The performance of the laboratory-made vessels was comparable to that of the commercial ones. In an investigation of the effect of thermal desorption in PHWE, it was found that at lower temperatures (200°C and 250°C) the effect of thermal desorption is smaller than the effect of the solvating properties of hot water. At 300°C, however, thermal desorption is the main mechanism. The effect of the geometry of the extraction vessel on recoveries was studied with five specially constructed extraction vessels. In addition to the extraction vessel geometry, the sediment packing style and the direction of water flow through the vessel were investigated.
The geometry of the vessel was found to have only a minor effect on the recoveries, and the same was true of the sediment packing style and the direction of water flow through the vessel. These are good results, because these parameters do not have to be carefully optimised before the start of extractions. Liquid-liquid extraction (LLE) and solid-phase extraction (SPE) were compared as trapping techniques for PHWE. LLE was more robust than SPE and provided better recoveries and repeatabilities. Problems related to blocking of the Tenax trap and unrepeatable trapping of the analytes were encountered in SPE. Thus, although LLE is more labour intensive, it can be recommended over SPE. The stabilities of the PAHs in aqueous solutions were measured using a batch-type reaction vessel. Degradation was observed at 300°C even with the shortest heating time. Ketones, quinones, and other oxidation products were observed. Although the conditions of the stability studies differed considerably from the extraction conditions in PHWE, the results indicate that the risk of analyte degradation must be taken into account in PHWE. The aqueous solubilities of acenaphthene, anthracene, and pyrene were measured, first below and then above the melting points of the analytes. Measurements below the melting point were made to check that the equipment was working, and the results were compared with those obtained earlier. Good agreement was found between the measured and literature values. A new saturation cell was constructed for the solubility measurements above the melting points of the analytes, because the flow-through saturation cell could not be used above the melting point. An exponential relationship was found between temperature and the solubilities measured for pyrene and anthracene.
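An exponential solubility-temperature relationship of the kind reported for pyrene and anthracene can be written as S(T) = S0 · exp(b · (T − T0)). A minimal sketch; S0, b, and T0 are illustrative assumptions, not the fitted values from the study:

```python
import math

# An exponential solubility-temperature model of the form reported
# for pyrene and anthracene. S0, b, and T0 are illustrative
# assumptions, not the measured parameters.

def solubility(T, S0=1.0, b=0.05, T0=298.0):
    """Aqueous solubility (arbitrary units) at temperature T (K)."""
    return S0 * math.exp(b * (T - T0))

# with b = 0.05 K^-1, solubility roughly doubles for every ~14 K rise
print(round(solubility(312.0) / solubility(298.0), 2))  # → 2.01
```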
Abstract:
The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them. Thus, the relative abundances of different labelling patterns contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes by utilizing the measured constraints on the relative abundances of different labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There exist two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of the different labelling patterns.
In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods that can get stuck in local optima can be avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes from the 13C labelling measurements as possible, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients, and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by flow analysis of metabolite fragments in the network. New linear constraints on the fluxes are then derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
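The steady-state balance constraints described above can be written as S v = 0, where S is the stoichiometric matrix of the internal metabolites and v the flux vector. A toy sketch (the three-flux network is invented) of why balances alone leave alternative pathways undetermined:

```python
import numpy as np

# Steady-state balance constraints: for each internal metabolite,
# producing fluxes must equal consuming fluxes, i.e. S v = 0. The toy
# network (one internal metabolite B, fed by flux v1 and drained by
# v2 and v3) is invented to show why balances alone cannot fix the
# split between alternative pathways.

# row: internal metabolite B; columns: fluxes v1, v2, v3
S = np.array([[1.0, -1.0, -1.0]])

# the feasible flux space is the null space of S
s = np.linalg.svd(S, compute_uv=False)
null_dim = S.shape[1] - int(np.sum(s > 1e-10))
print(null_dim)  # → 2: the split between v2 and v3 stays undetermined
```

Each independent labelling-derived constraint removes one such free direction, which is exactly what the direct approach exploits.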
Abstract:
The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly that of CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs. The famous Kyoto Protocol is an offspring of this process. The Kyoto Protocol and its accords state that signatory countries need to monitor their forest C pools and to follow the guidelines set by the IPCC in the preparation, reporting, and quality assessment of the C pool change estimates. The aims of this thesis were i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology by using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design of soil C stock changes. The applied methodology builds on forest inventory data (measured or modelled stand data) and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a-1, and the mean soil C sink was 0.7 Tg C a-1. Soil is slowly accumulating C as a consequence of the increased growing stock and of soil C stocks that are unsaturated in relation to the current detritus input, which is higher than at the beginning of the period. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently opposite in sign (e.g. vegetation was a sink while soil was a source).
The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and 9% to ± 19% (95% CI), and the further inclusion of upland mineral soils increased it to ± 24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy owing to the larger sectoral coverage of the inventory. If the national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation; otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools underline the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates. This is because annual forest sinks vary considerably, annual estimates are uncertain, and these uncertainties have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data because of the non-linear decomposition process, which is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications. The ultimate verification of sink estimates should be based on comparison with empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
Abstract:
Thrombophilia (TF) predisposes to both venous and arterial thrombosis at a young age. TF may also contribute to the thrombosis or stenosis of hemodialysis (HD) vascular access in patients with end-stage renal disease (ESRD). When involved in severe thrombosis, TF may be associated with an inappropriate response to anticoagulation. Lepirudin, a potent direct thrombin inhibitor (DTI) indicated for heparin-induced thrombocytopenia-related thrombosis, could offer a treatment alternative in TF. Monitoring of lepirudin, with its narrow therapeutic range, also demands new laboratory insights. These issues constitute the targets of this thesis. We evaluated the prevalence of TF in patients with ESRD and its impact on the thrombosis- or stenosis-free survival of the vascular access. Altogether 237 ESRD patients were prospectively screened for TF and thrombogenic risk factors prior to HD access surgery in 2002-2004 (mean follow-up 3.6 years). TF was evident in 43 (18%) of the ESRD patients, more often in males (23 vs. 9%, p=0.009). The known gene mutations FV Leiden and FII G20210A occurred in 4%. The vascular access matured sufficiently in 226 (95%). The 1-year thrombosis- and stenosis-free access survival was 72%. Female gender (hazard ratio, HR, 2.5; 95% CI 1.6-3.9) and TF (HR 1.9, 95% CI 1.1-3.3) were independent risk factors for shortened thrombosis- and stenosis-free survival. Additionally, TF or a thrombogenic background was found in relatively young patients with severe thrombosis either in the hepatic veins (Budd-Chiari syndrome, BCS; one patient) or with inoperable critical limb ischemia (CLI; six patients). Lepirudin was evaluated in an off-label setting in severe thrombosis after inefficacious traditional anticoagulation, when no treatment options remained other than severe invasive procedures, such as lower extremity amputation. The lepirudin treatments were repeatedly monitored clinically and with laboratory assessments (e.g. activated partial thromboplastin time, APTT).
Our preliminary studies with lepirudin in these thrombotic calamities appeared safe, and no bleeds occurred. Lepirudin, an effective DTI, calmed the thrombosis, and all patients gradually recovered. Only one limb amputation was performed, 3 years later during the follow-up (mean 4 years). Furthermore, we aimed to overcome the limitations of APTT and the confounding effects of warfarin (INR of 1.5-3.9) and lupus anticoagulant (LA). Lepirudin responses were assessed in vitro by five specific laboratory methods. The ecarin chromogenic assay (ECA) and anti-Factor IIa (anti-FIIa) correlated precisely (r=0.99) with each other and with spiked lepirudin in all plasma pools: normal, warfarin-containing, and LA-containing plasma. In contrast, in the presence of warfarin and LA, both APTT and prothrombinase-induced clotting time (PiCT®) were limited by non-linear and imprecise dose responses. As a global coagulation test, APTT is useful in parallel with the precise chromogenic methods ECA and anti-FIIa in challenging clinical situations. Lepirudin treatment requires a multidisciplinary approach to ensure appropriate patient selection, interpretation of laboratory monitoring, and treatment safety. TF seemed to be associated with complicated thrombotic events in venous (BCS), arterial (CLI), and vascular access systems. TF screening should be aimed at patients with repeated access complications or prior unprovoked thromboembolic events. Lepirudin inhibits both free and clot-bound thrombin, the latter of which heparin fails to inhibit. Lepirudin seems to offer a potent and safe option for the treatment of severe thrombosis. Multi-centre randomized trials are necessary to assess the management of complicated thrombotic events with DTIs such as lepirudin and to seek options for preventing access complications.
Abstract:
Cyclosporine is an immunosuppressant drug with a narrow therapeutic index and large variability in pharmacokinetics. To improve cyclosporine dose individualization in children, we used population pharmacokinetic modeling to study the effects of developmental, clinical, and genetic factors on cyclosporine pharmacokinetics in altogether 176 subjects (age range: 0.36–20.2 years) before and up to 16 years after renal transplantation. Pre-transplantation test doses of cyclosporine were given intravenously (3 mg/kg) and orally (10 mg/kg), on separate occasions, followed by blood sampling for 24 hours (n=175). After transplantation, in a total of 137 patients, cyclosporine concentration was quantified at trough, two hours post-dose, or with dose-interval curves. One hundred and four of the studied patients were genotyped for 17 putatively functionally significant sequence variations in the ABCB1, SLCO1B1, ABCC2, CYP3A4, CYP3A5, and NR1I2 genes. Pharmacokinetic modeling was performed with the nonlinear mixed effects modeling computer program NONMEM. A 3-compartment population pharmacokinetic model with first-order absorption without lag time was used to describe the data. The most important covariate affecting systemic clearance and distribution volume was allometrically scaled body weight, i.e. body weight^(3/4) for clearance and absolute body weight for volume of distribution. Clearance adjusted by absolute body weight declined with age, and pre-pubertal children (< 8 years) had an approximately 25% higher clearance/body weight (L/h/kg) than did older children. Adjustment of clearance for allometric body weight removed its relationship to age after the first year of life. This finding is consistent with a gradual reduction in relative liver size towards adult values, and a relatively constant CYP3A content in the liver from about 6–12 months of age to adulthood.
The other significant covariates affecting cyclosporine clearance and volume of distribution were hematocrit, plasma cholesterol, and serum creatinine, explaining up to 20%–30% of inter-individual differences before transplantation. After transplantation, their predictive role was smaller, as the variations in hematocrit, plasma cholesterol, and serum creatinine were also smaller. Before transplantation, no clinical or demographic covariates were found to affect oral bioavailability, and no systematic age-related changes in oral bioavailability were observed. After transplantation, older children receiving cyclosporine twice daily as the gelatine capsule microemulsion formulation had about 1.25–1.3 times higher bioavailability than did the younger children receiving the liquid microemulsion formulation thrice daily. Moreover, cyclosporine oral bioavailability increased over 1.5-fold in the first month after transplantation, returning thereafter gradually to its initial value within 1–1.5 years. The largest cyclosporine doses were administered in the first 3–6 months after transplantation, and thereafter the single doses of cyclosporine were often smaller than 3 mg/kg. Thus, the results suggest that cyclosporine displays dose-dependent, saturable pre-systemic metabolism even at low single doses, whereas complete saturation of CYP3A4 and MDR1 (P-glycoprotein) renders cyclosporine pharmacokinetics dose-linear at higher doses. No significant associations were found between genetic polymorphisms and cyclosporine pharmacokinetics before transplantation in the whole population for which genetic data were available (n=104). However, in children older than eight years (n=22), heterozygous and homozygous carriers of the ABCB1 c.2677T or c.1236T alleles had about 1.3 or 1.6 times higher oral bioavailability, respectively, than did non-carriers.
After transplantation, none of the ABCB1 SNPs or any other SNPs were found to be associated with cyclosporine clearance or oral bioavailability in the whole population, in the patients older than eight years, or in the patients younger than eight years. In the whole population, however, in patients carrying the NR1I2 g.-25385C–g.-24381A–g.-205_-200GAGAAG–g.7635G–g.8055C haplotype, the bioavailability of cyclosporine was about one tenth lower, per allele, than in non-carriers. This effect was also significant in the subgroup of patients older than eight years. Furthermore, in patients carrying the NR1I2 g.-25385C–g.-24381A–g.-205_-200GAGAAG–g.7635G–g.8055T haplotype, the bioavailability was almost one fifth higher, per allele, than in non-carriers. It may be possible to improve the individualization of cyclosporine dosing in children by accounting for the effects of developmental factors (body weight, liver size), time after transplantation, and cyclosporine dosing frequency/formulation. Further studies are required on the predictive value of genotyping for the individualization of cyclosporine dosing in children.
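The allometric body-weight scaling used in the model above can be sketched as follows; the reference clearance for a 70 kg subject is a hypothetical value chosen for illustration, not an estimate from the study:

```python
# Allometric scaling of clearance, CL proportional to weight^(3/4).
# CL_REF at the 70 kg reference weight is a hypothetical value for
# illustration, not a thesis estimate.

CL_REF, W_REF = 30.0, 70.0  # L/h at 70 kg (assumed)

def clearance(weight_kg, cl_ref=CL_REF, ref_kg=W_REF):
    """CL = CL_ref * (W / W_ref) ** 0.75 (allometric scaling)."""
    return cl_ref * (weight_kg / ref_kg) ** 0.75

# per-kilogram clearance is higher in a 20 kg child than in a 70 kg adult,
# consistent in direction with the higher clearance/kg reported for children
child_per_kg = clearance(20.0) / 20.0
adult_per_kg = clearance(70.0) / 70.0
print(round(child_per_kg / adult_per_kg, 2))  # → 1.37
```

Because the exponent is below 1, clearance per kilogram falls as weight grows, which is why adjusting for weight^(3/4) rather than absolute weight removed the apparent age dependence.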
Abstract:
Inflation is a period of accelerated expansion in the very early universe, which has the appealing aspect that it can create primordial perturbations via quantum fluctuations. These primordial perturbations have been observed in the cosmic microwave background, and they also function as the seeds of all large-scale structure in the universe. Curvaton models are simple modifications of the standard inflationary paradigm in which inflation is driven by the energy density of the inflaton, but another field, the curvaton, is responsible for producing the primordial perturbations. The curvaton decays after inflation has ended, whereupon the isocurvature perturbations of the curvaton are converted into adiabatic perturbations. Since the curvaton must decay, it must have some interactions. Additionally, realistic curvaton models typically have some self-interactions. In this work we consider self-interacting curvaton models in which the self-interaction is a monomial in the potential, suppressed by the Planck scale, so that the self-interaction is very weak. Nevertheless, since the self-interaction makes the equations of motion non-linear, it can modify the behaviour of the model drastically. The most intriguing aspect of this behaviour is that the final properties of the perturbations become highly dependent on the initial values. Departures from the Gaussian distribution are important observables of the primordial perturbations. Due to the non-linearity of the self-interacting curvaton model and its sensitivity to initial conditions, it can produce significant non-Gaussianity in the primordial perturbations. In this work we investigate the non-Gaussianity produced by the self-interacting curvaton, and demonstrate that the non-Gaussianity parameters do not obey the analytically derived approximate relations often cited in the literature. Furthermore, we also consider a self-interacting curvaton with a mass in the TeV scale.
Motivated by realistic particle physics models such as the Minimal Supersymmetric Standard Model, we demonstrate that a curvaton model within this mass range can be responsible for the observed perturbations if it can decay late enough.
Abstract:
In this thesis a manifold learning method is applied to the problem of WLAN positioning and automatic radio map creation. Due to the nature of WLAN signal strength measurements, a signal map created from raw measurements results in non-linear distance relations between measurement points. These signal strength vectors reside in a high-dimensional coordinate system. With the help of the so-called Isomap algorithm, the dimensionality of this map can be reduced, making it easier to process. By embedding position-labeled strategic key points, we can automatically adjust the mapping to match the surveyed environment. The environment is thus learned in a semi-supervised way: gathering training points and embedding them in a two-dimensional manifold gives us a rough mapping of the measured environment. After a calibration phase, in which the labeled key points in the training data are used to associate coordinates in the manifold representation with geographical locations, we can perform positioning using the adjusted map. This can be achieved through a traditional supervised learning process, which in our case is a simple nearest-neighbor matching of a sampled signal strength vector. We deployed this system at two locations on the Kumpula campus in Helsinki, Finland. Results indicate that positioning based on the learned radio map can achieve good accuracy, especially in hallways and other areas of the environment where the WLAN signal is constrained by obstacles such as walls.
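The nearest-neighbor matching step described above can be sketched with a toy radio map; the fingerprints and coordinates are invented, and the real system performs the matching in the Isomap-reduced space after calibration:

```python
import math

# Nearest-neighbor positioning: a sampled WLAN signal strength vector
# is matched against fingerprints in a calibrated radio map. The map
# below (three access points, three positions) is invented.

radio_map = {
    (0.0, 0.0): [-40, -70, -80],   # (x, y) -> RSSI fingerprint in dBm
    (5.0, 0.0): [-55, -60, -75],
    (5.0, 5.0): [-70, -50, -65],
}

def locate(sample):
    """Return the map position whose fingerprint is nearest in Euclidean distance."""
    return min(radio_map, key=lambda pos: math.dist(radio_map[pos], sample))

print(locate([-54, -61, -74]))  # → (5.0, 0.0)
```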
Resumo:
Detecting Earnings Management Using Neural Networks. In trying to balance relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, company management to use their judgment and to make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management. A majority of these methods are based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression that can handle non-linear relationships is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables. The discretionary accruals in the highest and lowest quartiles for these six variables are then compared. Third, a data set containing simulated earnings management is used. Both expense and revenue manipulation ranging between -5% and 5% of lagged total assets is simulated. Furthermore, two neural network-based models and two linear regression-based models are used with a data set containing financial statement data from 110 failed companies.
Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
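The core idea of the comparison above — estimating "normal" accruals with a linear model versus a feed-forward network, and treating residuals as discretionary accruals — can be sketched as follows. This is a hypothetical illustration on simulated data with a deliberately non-linear performance effect; the regressors and coefficients are assumptions, not the study's actual Jones-model data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 300
# Hypothetical Jones-model style regressors (scaled by lagged total assets),
# e.g. change in revenues and gross property, plant and equipment.
X = rng.normal(size=(n, 2))
# Simulate total accruals with a non-linear performance component (tanh term)
# that a purely linear model cannot capture.
accruals = (0.5 * X[:, 0] - 0.3 * X[:, 1]
            + 0.2 * np.tanh(3 * X[:, 0])
            + rng.normal(scale=0.05, size=n))

linear = LinearRegression().fit(X, accruals)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X, accruals)

# "Discretionary" accruals are the residuals of each fitted model;
# a biased normal-accruals model inflates these residuals.
resid_linear = accruals - linear.predict(X)
resid_net = accruals - net.predict(X)
```

The study's finding that linear Jones-type models produce biased estimates corresponds here to the linear residuals still containing the unmodelled non-linear component.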
Resumo:
This paper investigates the clustering pattern in the Finnish stock market. Using trading volume and time as factors capturing the clustering pattern in the market, the Keim and Madhavan (1996) and the Engle and Russell (1998) models provide the framework for the analysis. The descriptive and parametric analyses provide evidence that an important determinant of the famous U-shape pattern in the market is the rate of information arrivals as measured by large trading volumes and durations at the market open and close. Specifically: 1) the larger the trading volume, the greater the impact on prices both in the short and the long run, so prices will differ across quantities; 2) large trading volume is a non-linear function of price changes in the long run; 3) arrival times are positively autocorrelated, indicating a clustering pattern; and 4) information arrivals, as approximated by durations, are negatively related to trading flow.
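For context, the Engle and Russell (1998) framework referred to above is the autoregressive conditional duration (ACD) model. In its basic ACD(1,1) form (a standard specification, not quoted from this abstract), the duration between market events is scaled by a conditional expected duration:

```latex
% ACD(1,1): duration x_i scaled by its conditional expectation psi_i
x_i = \psi_i \,\varepsilon_i, \qquad \varepsilon_i \;\text{i.i.d.},\; \mathbb{E}[\varepsilon_i] = 1,
\qquad
\psi_i = \omega + \alpha\, x_{i-1} + \beta\, \psi_{i-1}
```

Positive values of $\alpha$ and $\beta$ generate the positively autocorrelated, clustered durations described in finding 3 above.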
Resumo:
This thesis studies the interest-rate policy of the ECB by estimating monetary policy rules using real-time data and central bank forecasts. The aim of the estimations is to characterize a decade of common monetary policy and to examine how different models perform at this task. The estimated rules include contemporaneous Taylor rules, forward-looking Taylor rules, non-linear rules and forecast-based rules. The non-linear models allow for the possibility of zone-like preferences and an asymmetric response to key variables. The models therefore encompass the most popular sub-group of simple models used for policy analysis as well as the more unusual non-linear approach. In addition to the empirical work, this thesis also contains a more general discussion of monetary policy rules, mostly from a New Keynesian perspective. This discussion includes an overview of some notable related studies, optimal policy, policy gradualism and several other related subjects. The regression estimations are performed with either least squares or the generalized method of moments, depending on the requirements of the estimation. The estimations use data from both the Euro Area Real-Time Database and the central bank forecasts published in ECB Monthly Bulletins. These data sources represent some of the best data available for this kind of analysis. The main results of this thesis are that forward-looking behavior appears highly prevalent, but that standard forward-looking Taylor rules offer only ambivalent results with regard to inflation. Non-linear models are shown to work, but do not have a strong rationale over a simpler linear formulation. However, the forecasts appear to be highly useful in characterizing policy and may offer the most accurate depiction of a predominantly forward-looking central bank. In particular, the inflation response appears much stronger, while the output response becomes highly forward-looking as well.
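For reference, the contemporaneous Taylor rule mentioned above is commonly estimated in a partial-adjustment (interest-rate smoothing) form; this is the standard specification in the literature, with the usual symbols, not a formula quoted from the thesis:

```latex
% Contemporaneous Taylor rule with smoothing parameter rho:
% i_t policy rate, pi_t inflation, y_t output gap
i_t = \rho\, i_{t-1} + (1-\rho)\left(\alpha + \beta\, \pi_t + \gamma\, y_t\right) + \epsilon_t
```

Forward-looking versions replace $\pi_t$ with expected inflation $\mathbb{E}_t\,\pi_{t+k}$, which introduces endogenous regressors and motivates the use of the generalized method of moments mentioned above.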