11 results for Decimal multiplication

in Helda - Digital Repository of the University of Helsinki


Relevance:

20.00%

Abstract:

The primary aim of the present study was to find an efficient and simple method of vegetative propagation for producing large numbers of hybrid aspen (Populus tremula L. × P. tremuloides Michx.) plants for forest plantations. The key objectives were to investigate the main physiological factors that affect the ability of cuttings to regenerate and to determine whether these factors could be manipulated by different growth conditions. In addition, clonal variation in traits related to propagation success was examined. According to our results, with the stem cutting method, depending on the clone, it is possible to obtain only 1–8 plants from one stock plant per year. With the root cutting method the corresponding values for two-year-old stock plants are 81–207 plants. The difference in number of cuttings between one- and two-year-old stock plants is so pronounced that it is economically feasible to grow stock plants for two years. There is no reason to use much older stock plants as a source of cuttings, as it has been observed that rooting ability diminishes as root diameter increases. Clonal variation is the most important individual factor in the propagation of hybrid aspen. The fact that the clones that sprouted most efficiently also rooted best facilitates the selection of clones for large-scale propagation. In practice, root cuttings taken from all parts of the root system of hybrid aspen were capable of producing new shoots and roots. However, for efficient rooting it is important to use roots smaller than one centimeter in diameter. Both rooting and sprouting, as well as sprouting rate, were increased by high soil temperature; in our studies the highest temperature tested (30 °C) was the best. Light accelerated the sprouting of root cuttings, but they rooted best in dark conditions. Rooting is essential because without roots the sprouted cutting cannot survive long. For aspen the criteria for clone selection are primarily fiber qualities and growth rate, but the ability to regenerate efficiently is also essential. For large-scale propagation it is very important to find clones from which many cuttings per stock plant can be obtained. In light of production costs, however, it is even more important that the regeneration ability of the produced cuttings be high.

Relevance:

10.00%

Abstract:

The grotesque in Finnish literature: four case studies. The topic of the dissertation is the grotesque in Finnish literature. The dissertation is twofold. Firstly, it focuses on the genre tradition of the grotesque, especially its other main branch, which has been named, following in Bakhtin's footsteps, the subjective ("chamber") grotesque, as distinguished from the carnivalistic ("public square") grotesque. Secondly, the dissertation analyses and interprets four fictional literary works within the context of the grotesque genre, constructed on the basis of previous research and literature. These works are the novel Rakastunut rampa (1922) by Joel Lehtonen, the novel Prins Efflam (1953, translated into Finnish as Kalastajakylän prinssi) by Sally Salminen, the short story Orjien kasvattaja (1965) by Juhani Peltonen, and the novel Veljeni Sebastian (1985) by Annika Idström. What connects these works, representing early or full modernism, is the supposition that they belong to the tradition of the subjective grotesque, not only in occasional details but also in a more comprehensive manner. The premises are that genre is a significant part of the work and that reading a work in the context of the genre tradition adds something essential to the interpretation of individual texts and reveals meanings that might otherwise go unnoticed. The main characteristic of the grotesque is breaking the norm. This is accomplished through different means: degradation, distortion, inversion, combination, exaggeration and multiplication. The most significant strategy for breaking the norm is incongruence: the grotesque combines conflicting or mutually exclusive categories and elements on different levels. Simultaneously, the grotesque unravels categorisations and questions our way of perceiving the world. The grotesque not only poses a threat to one's identity, but can also pose a threat to the cognitive process. An analysis of the fictional works is presented as case studies of each chosen work as a whole. The analysis is based on the method of close reading, which draws on both classical and postclassical narratology, and the analysis and interpretation are expanded within the genre tradition of the grotesque. The grotesque is also analysed in terms of its relationship to neighbouring categories and genre traditions, such as the tragic, the sublime, the horror story and the coming-of-age story. This dissertation shows how the grotesque is constructed, in the works analysed as elsewhere, on deviations from the norm and on incongruence, and how it is layered in these novels on and between different levels, such as the story, text, narration, composition and the world of the novels. In all the works analysed, the grotesque reduces and subverts. Again and again it reveals different sides of humanity stripped of idealisation and glorification. The dissertation shows that Finnish literature is not a solitary island, even with regard to the grotesque, for it continues and offers variations on the common tradition of grotesque literature, and likewise draws on grotesque visual arts. This dissertation is the first monograph in Finnish literary research focusing on the subjective grotesque.

Relevance:

10.00%

Abstract:

Let X be a topological space and K the real algebra of the reals, the complex numbers, the quaternions, or the octonions. The functions from X to K form an algebra T(X,K) with pointwise addition and multiplication. We study first-order definability of the constant function set N' corresponding to the set of the naturals in certain subalgebras of T(X,K). The vocabulary consists of the symbols Constant, +, *, 0', and 1', where Constant denotes the predicate defining the constants, and 0' and 1' denote the constant functions with values 0 and 1, respectively. The most important result is the following. Let X be a topological space, K the real algebra of the reals, the complex numbers, the quaternions, or the octonions, and R a subalgebra of the algebra of all functions from X to K containing all constants. Then N' is definable in the structure (R; Constant, +, *, 0', 1') if at least one of the following conditions is true. (1) The algebra R is a subalgebra of the algebra of all continuous functions containing a piecewise open mapping from X to K. (2) The space X is sigma-compact, and R is a subalgebra of the algebra of all continuous functions containing a function whose range contains a nonempty open set of K. (3) The algebra K is the set of the reals or the complex numbers, and R contains a piecewise open mapping from X to K and does not contain an everywhere unbounded function. (4) The algebra R contains a piecewise open mapping from X to the set of the reals and a function whose range contains a nonempty open subset of K; furthermore, R does not contain an everywhere unbounded function.
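Restated compactly in our own notation (a paraphrase of the abstract, not a quotation from the thesis):

```latex
% X a topological space; K one of R, C, H, O viewed as a real algebra;
% R a subalgebra of T(X,K) containing all constant functions.
\[
  \mathcal{R} = \langle R;\ \mathrm{Constant},\ +,\ \cdot,\ 0',\ 1' \rangle,
  \qquad
  N' = \{\, n' : n \in \mathbb{N} \,\},
  \qquad
  n' = \underbrace{1' + \cdots + 1'}_{n}.
\]
% Main theorem: N' is first-order definable in \mathcal{R} under any one
% of the conditions (1)-(4) above, e.g. when R consists of continuous
% functions and contains a piecewise open mapping from X to K.
```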

Relevance:

10.00%

Abstract:

Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable, and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra.

The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability, since the factor matrices are of the same type as the original matrix, and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
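As a concrete illustration of the Boolean product mentioned above, here is a minimal sketch in Python; the small example matrices are invented for illustration and do not come from the thesis:

```python
import numpy as np

def boolean_matmul(B: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Boolean matrix product: (B o C)[i, j] = OR_k (B[i, k] AND C[k, j]).

    Equivalent to the ordinary 0/1 integer product with every entry
    capped at 1, so the result is again a binary matrix.
    """
    return np.minimum(B @ C, 1)

# A Boolean rank-2 factorization A = B o C of a 4x4 binary matrix.
B = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]])
C = np.array([[1, 1, 0, 0],
              [0, 1, 1, 1]])

print(boolean_matmul(B, C))  # binary matrix, entries in {0, 1}
print(B @ C)                 # ordinary product: entry (2, 1) is 2, not 1
```

The cap makes overlapping factors harmless: a cell covered by several rank-1 factors is still just 1, which is exactly why the Boolean product is the more intuitive operation for binary data.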

Relevance:

10.00%

Abstract:

Volatility is central in options pricing and risk management. It reflects the uncertainty of investors and the inherent instability of the economy. Time series methods are among the most widely applied scientific methods for analyzing and predicting volatility. Very frequently sampled data contain much valuable information about the different elements of volatility and may ultimately reveal the reasons for time-varying volatility. The use of such ultra-high-frequency data is common to all three essays of the dissertation, which belongs to the field of financial econometrics. The first essay uses wavelet methods to study the time-varying behavior of scaling laws and long memory in the five-minute volatility series of Nokia on the Helsinki Stock Exchange around the burst of the IT bubble. The essay is motivated by earlier findings which suggest that different scaling laws may apply to intraday time-scales and to larger time-scales, implying that the so-called annualized volatility depends on the data sampling frequency. The empirical results confirm the appearance of time-varying long memory and different scaling laws that, for a significant part, can be attributed to investor irrationality and to an intraday volatility periodicity called the New York effect. The findings have potentially important consequences for options pricing and risk management, which commonly assume constant memory and scaling. The second essay investigates modelling the duration between trades in stock markets. Durations convey information about investor intentions and provide an alternative view of volatility. Generalizations of standard autoregressive conditional duration (ACD) models are developed to meet needs observed in previous applications of the standard models. According to the empirical results, based on data on actively traded stocks on the New York Stock Exchange and the Helsinki Stock Exchange, the proposed generalization clearly outperforms the standard models and also performs well in comparison to another recently proposed alternative. The distribution used to derive the generalization may also prove valuable in other areas of risk management. The third essay studies empirically the effect of decimalization on volatility and market microstructure noise. Decimalization refers to the change from fractional pricing to decimal pricing; it was carried out on the New York Stock Exchange in January 2001. The methods used here are more accurate than those in earlier studies and put more weight on market microstructure. The main result is that decimalization decreased observed volatility by reducing noise variance, especially for the highly active stocks. The results are of use in risk management and market mechanism design.
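For reference, the baseline model that the second essay generalizes has a simple recursive form. Below is a minimal simulation sketch of the standard Engle–Russell ACD(1,1) with unit-exponential innovations; the parameter values are illustrative, not estimates from the dissertation:

```python
import numpy as np

def simulate_acd11(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate trade durations x_i = psi_i * eps_i, where
    psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1}
    is the conditional expected duration and eps_i ~ i.i.d.
    Exponential(1) (the baseline ACD specification).
    """
    rng = np.random.default_rng(seed)
    eps = rng.exponential(1.0, n)
    x = np.empty(n)
    psi = np.empty(n)
    psi[0] = omega / (1.0 - alpha - beta)  # unconditional mean duration
    x[0] = psi[0] * eps[0]
    for i in range(1, n):
        psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
        x[i] = psi[i] * eps[i]
    return x, psi

durations, _ = simulate_acd11(10_000)
print(durations.mean())  # close to omega / (1 - alpha - beta) = 1.0
```

The generalizations studied in the essay modify this baseline, for example through the innovation distribution, which is why the distribution itself is noted as potentially useful elsewhere in risk management.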

Relevance:

10.00%

Abstract:

This thesis focuses on how elevated CO2 and/or O3 affect below-ground processes in semi-natural vegetation, with an emphasis on greenhouse gases (GHG), N cycling and microbial communities. Meadow mesocosms mimicking lowland hay meadows in Jokioinen, SW Finland, were enclosed in open-top chambers and exposed to ambient and elevated levels of O3 (40-50 ppb) and/or CO2 (+100 ppm) for three consecutive growing seasons, while chamberless plots were used as chamber controls. Chemical and microbiological analyses as well as laboratory incubations of the mesocosm soils under different treatments were used to study the effects of O3 and/or CO2. The artificially constructed mesocosms were also compared with natural meadows with regard to GHG fluxes and soil characteristics. In addition to research conducted at the ecosystem level (i.e. the mesocosm study), soil microbial communities were also examined in a pot experiment with monocultures of individual species. By comparing the mesocosms with natural meadows of similar plant assemblage, it was possible to demonstrate that the artificial mesocosms simulated natural habitats, even though some differences were found in the CH4 oxidation rate, soil mineral N, and total C and N concentrations in the soil. After three growing seasons of fumigation, the fluxes of N2O, CH4, and CO2 were decreased in the NF+O3 treatment, and the soil NH4+-N and mineral N concentrations were lower in the NF+O3 treatment than in the NF control treatment. The mesocosm soil microbial communities were affected negatively by the NF+O3 treatment, as the total, bacterial, actinobacterial, and fungal PLFA biomasses as well as the fungal:bacterial biomass ratio decreased under elevated O3. In the pot experiment, O3 decreased the total, bacterial, actinobacterial, and mycorrhizal PLFA biomasses in the bulk soil and affected the microbial community structure in the rhizosphere of L. pratensis, whereas the bulk soil and rhizosphere of the other monoculture, A. capillaris, remained unaffected by O3. Elevated CO2 caused only minor and insignificant changes in the GHG fluxes, N cycling, and the microbial community structure. In the present study, the below-ground processes were modified after three years of moderate O3 enhancement. A tentative conclusion is that a decrease in N availability may have feedback effects on plant growth and competition and affect the N cycling of the whole meadow ecosystem. Ecosystem-level changes occur slowly, and multiplication of the responses might be expected in the long run.

Relevance:

10.00%

Abstract:

Nitrogen (N) and phosphorus (P) are essential elements for all living organisms. In excess, however, they contribute to several environmental problems, such as aquatic and terrestrial eutrophication. Globally, human action has multiplied the volume of N and P cycling since the onset of industrialization. This multiplication is a result of intensified agriculture, increased energy consumption and population growth. Industrial ecology (IE) is a discipline in which human interaction with ecosystems is investigated using a systems-analytical approach. The main idea behind IE is that industrial systems resemble ecosystems and, like them, can be described using material, energy and information flows and stocks. Industrial systems are dependent on the resources provided by the biosphere, and the two cannot be separated from each other. When studying substance flows, the aims of the research from the viewpoint of IE can be, for instance, to elucidate how the cycles of a certain substance could be made more closed and how the flows of a certain substance could be decreased per unit of production (= dematerialization). In Finland, N and P are studied widely in different ecosystems and environmental emissions. A holistic picture comparing different societal systems is, however, lacking. In this thesis, flows of N and P were examined in Finland using substance flow analysis (SFA) in the following four subsystems: I) forest industry and use of wood fuels, II) food production and consumption, III) energy, and IV) municipal waste. A detailed analysis of the situation at the end of the 1990s was performed. Furthermore, the historical development of the N and P flows was investigated in the energy system (III) and the municipal waste system (IV). The main research sources were official statistics, literature, monitoring data, and expert knowledge. The aim was to identify and quantify the main flows of N and P in Finland in the four subsystems studied, to elucidate whether the nutrient systems are cyclic or linear, and to identify how these systems could become more efficient in the use and cycling of N and P. A final aim was to discuss how this type of analysis can be used to support decision-making on environmental problems and solutions. Of the four subsystems, the food production and consumption system and the energy system created the largest N flows in Finland. For P flows, the food production and consumption system (Paper II) was clearly the largest, followed by the forest industry and use of wood fuels and the energy system. The contribution of Finland to N and P flows on a global scale is low, but on a per capita basis we are among the largest producers of these flows, with relatively high energy and meat consumption being the main reasons. The analysis revealed the openness of all four systems. This openness is due to the highly international character of the Finnish markets, the large-scale use of synthetic fertilizers and energy resources, and the low recycling rate of many waste fractions. Reduction in the use of fuels and synthetic fertilizers, reorganization of the structure of energy production, reduced human intake of nutrients and technological development are crucial in diminishing the N and P flows. To enhance nutrient recycling and replace inorganic fertilizers, recycling of such wastes as wood ash and sludge could be promoted.
SFA is not usually sufficiently detailed to allow specific recommendations for decision-making to be made, but it does yield useful information about the relative magnitude of the flows and may reveal unexpected losses. Sustainable development is a widely accepted target for all human action. SFA is one method that can help to analyse how effective different efforts are in leading to a more sustainable society. SFA's strength is that it allows a holistic picture of different natural and societal systems to be drawn. Furthermore, when the environmental impact of a certain flow is known, the method can be used to prioritize environmental policy efforts.
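To make the bookkeeping behind SFA concrete, here is a minimal sketch with a made-up flow network; all process names and flow values are invented for illustration and are not taken from the thesis:

```python
# Toy substance flow analysis: nitrogen flows (kt N / year) through
# hypothetical processes. Every number here is illustrative only.
flows = {
    ("imports", "agriculture"): 150.0,     # synthetic fertilizer
    ("agriculture", "food"): 60.0,
    ("agriculture", "environment"): 90.0,  # leaching and emissions
    ("food", "waste"): 55.0,
    ("waste", "agriculture"): 5.0,         # recycled fraction
    ("waste", "environment"): 50.0,
}

def balance(node: str) -> float:
    """Mass balance of a process: inflow minus outflow = stock change."""
    inflow = sum(v for (src, dst), v in flows.items() if dst == node)
    outflow = sum(v for (src, dst), v in flows.items() if src == node)
    return inflow - outflow

for node in ("agriculture", "food", "waste"):
    print(f"{node}: net accumulation {balance(node):+.1f} kt N/yr")

# Openness indicator: share of the waste outflow returned to use.
recycled = flows[("waste", "agriculture")]
total_out = recycled + flows[("waste", "environment")]
print(f"waste recycling rate: {recycled / total_out:.0%}")
```

A system is "cyclic" in the sense used above when a large share of each outflow re-enters the system as an input; in this toy network most nitrogen exits to the environment, so the system is essentially linear.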

Relevance:

10.00%

Abstract:

Plus-stranded (+) RNA viruses multiply within the cellular environment as tightly integrated units and rely on the genetic information carried within their genomes for multiplication and, hence, persistence. The minimal genomes of (+) RNA viruses cannot encode all the molecular machinery required for virus multiplication. This places requirements on the virus, which must form compatible interactions with host components during multiplication to successfully utilize primary metabolites as building blocks or metabolic energy, and to divert the protein synthesis machinery to the production of viral proteins. In fact, the emerging picture of a virus-infected cell displays tight integration with the virus, from simple host-virus protein interactions through to major changes in the physiological state of the host cell. This study set out to develop a method for identifying host components, mainly host proteins, that interact with proteins of Potato virus A (PVA; genus Potyvirus) during infection. This goal was approached by developing affinity-tag-based methods for the purification of viral proteins complexed with associated host proteins from infected plants. Using this method, host membrane-associated viral ribonucleoprotein (RNP) complexes were obtained, and several host and viral proteins could be identified as components of these complexes. One of the host proteins identified using this strategy was a member of the heat shock protein 70 (HSP70) family, and this protein was chosen for further analysis. To enable the analysis of viral gene expression, a second method was developed based on Agrobacterium-mediated virus genome delivery into plant cells and detection of virally expressed Renilla luciferase (RLUC) as a quantitative measure of viral gene expression. Using this method, it was observed that down-regulation of HSP70 caused a PVA coat protein (CP)-mediated defect associated with replication. Further experimentation suggested that CP can inhibit viral gene expression and that a distinct translational activity coupled to replication, referred to as replication-associated translation (RAT), exists. Unlike translation of replication-deficient viral RNA, RAT was dependent on HSP70 and its co-chaperone CPIP. Together, HSP70 and CPIP regulated CP turnover by promoting its modification by ubiquitin. Based on these results, an HSP70- and CPIP-driven mechanism is proposed that functions to regulate CP during viral RNA replication and/or translation, possibly to prevent premature particle assembly caused by CP association with viral RNA.

Relevance:

10.00%

Abstract:

Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region in Finland. The repeatability and standard deviation of random measurement error in visual acuity and refractive error determination in a clinical environment in cataractous, pseudophakic and healthy eyes were estimated by re-examining the visual acuity and refractive error of patients referred for cataract surgery or consultation by ophthalmic professionals. Altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined. During an average waiting time of 13 months, visual acuity in the study eye decreased from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values). The average decrease in vision was 0.27 logMAR per year. In the fastest quartile, the visual acuity change per year was 0.75 logMAR, and in the second fastest 0.29 logMAR; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery increased from 1.0 to 7.2 operations per 1000 inhabitants per year in the Vaasa region. The average preoperative visual acuity in the operated eye increased by 0.85 logMAR (in decimal values from 0.03 to 0.2) and in the better eye by 0.27 logMAR (in decimal values from 0.23 to 0.43) over this period. The proportion of patients profoundly visually handicapped (VA in the better eye <0.1) before the operation fell from 15% to 4%, and that of patients less profoundly visually handicapped (VA in the better eye 0.1 to <0.3) from 47% to 15%. The repeatability of visual acuity measurement, estimated as a coefficient of repeatability for all 99 eyes, was ±0.18 logMAR, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability, with a coefficient of repeatability of ±0.24 logMAR, while eyes with a visual acuity of 0.7 or better had the smallest, ±0.12 logMAR. The repeatability of refractive error measurement was studied in the same patient material as the repeatability of visual acuity. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and spherical equivalents and expressed as coefficients of repeatability. Coefficients of repeatability for all eyes for the vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14 D), but the difference between the visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for a change of spectacles for eyes with good vision, the corresponding basis for eyes in the visual acuity range of 0.3-0.65 would be ±1 D. Differences in repeated visual acuity measurements are partly explained by errors in refractive error measurements.
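The two acuity scales quoted throughout this abstract are linked by a base-10 logarithm; as a check on the figures above (standard conversion, worked numbers ours):

```latex
\[
  \mathrm{logMAR} = -\log_{10} V_{\mathrm{decimal}},
  \qquad
  V_{\mathrm{decimal}} = 10^{-\mathrm{logMAR}} .
\]
% Pre-surgery example from the text:
% 10^{-0.68} \approx 0.21 \approx 0.2 (Snellen decimal),
% 10^{-0.96} \approx 0.11 \approx 0.1 .
```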

Relevance:

10.00%

Abstract:

The Transition Radiation Tracker (TRT) of the ATLAS experiment at the LHC is part of the Inner Detector. It is designed as a robust and powerful gaseous detector that provides tracking through individual drift tubes (straws) as well as particle identification via transition radiation (TR) detection. The straw tubes are operated with Xe-CO2-O2 70/27/3, a gas that combines the advantages of efficient TR absorption, a short electron drift time and minimal ageing effects. The modules of the barrel part of the TRT were built in the United States, while the end-cap wheels are assembled at two Russian institutes. Acceptance tests of barrel modules and end-cap wheels are performed at CERN before assembly and integration with the Semiconductor Tracker (SCT) and the Pixel Detector. This thesis first describes simulations of the TRT straw tube. The argon-based acceptance gas mixture as well as two xenon-based operating gases are examined for their properties. Drift velocities and Townsend coefficients are computed with the program Magboltz and used to study electron drift and multiplication in the straw with the software Garfield. The inclusion of Penning transfers in the avalanche process leads to remarkable agreement with experimental data. A high level of cleanliness in the TRT's acceptance-test gas system is indispensable. To monitor gas purity, a small straw tube detector was constructed and extensively used to study the ageing behaviour of the straw tube in Ar-CO2. A variety of ageing tests are presented and discussed. Acceptance tests for the TRT cover dimensions, wire tension, gas-tightness, high-voltage stability and gas gain uniformity along each individual straw. The thesis gives details on acceptance criteria and measurement methods in the case of the end-cap wheels. Special focus is put on wire tension and straw straightness. The effect of geometrically deformed straws on gas gain and energy resolution is examined in an experimental setup and compared with simulation studies. An overview of the most important results from the end-cap wheels tested up to this point is presented.
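For context, the electron multiplication that Magboltz and Garfield model follows the usual proportional-counter picture (textbook relations, not formulas quoted from the thesis): in the cylindrical field of a straw, the gas gain is the exponentiated integral of the first Townsend coefficient over the avalanche region:

```latex
% Cylindrical straw: wire radius a, straw (cathode) radius b, voltage V.
\[
  E(r) = \frac{V}{r \, \ln(b/a)},
  \qquad
  G = \exp\!\Big( \int_{a}^{r_{c}} \alpha\big(E(r)\big)\, \mathrm{d}r \Big),
\]
% where \alpha is the first Townsend coefficient (as computed by
% Magboltz) and r_c bounds the region where \alpha > 0. Penning
% transfers effectively increase \alpha and hence the gain.
```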

Relevance:

10.00%

Abstract:

Toeplitz operators are among the most important classes of concrete operators, with applications to several branches of pure and applied mathematics. This doctoral thesis deals with Toeplitz operators on analytic Bergman, Bloch and Fock spaces. Usually, a Toeplitz operator is a composition of multiplication by a function and a suitable projection. The present work generalizes the notion to the case where the function is replaced by a distributional symbol. Fredholm theory for Toeplitz operators with matrix-valued symbols is also considered. The subject of this thesis belongs to the areas of complex analysis, functional analysis and operator theory. The work contains five research articles. Articles one, three and four deal with finding suitable distributional symbol classes in the Bergman, Fock and Bloch space settings, respectively. In each case the symbol class turns out to be a certain weighted Sobolev-type space of distributions. The Bergman space setting is the most straightforward. When dealing with Fock spaces, some difficulties arise from the unboundedness of the complex plane and the properties of the Gaussian measure in the definition. In the Bloch-type spaces an additional logarithmic weight must be introduced. Sufficient conditions for boundedness and compactness are derived. Article two contains a part showing that, under additional assumptions, the condition for Bergman spaces is also necessary. The fifth article deals with Fredholm theory for Toeplitz operators with matrix-valued symbols. The essential spectra and index theorems are obtained with the help of Hardy space factorization and the Berezin transform, among other tools. Article two also has a part dealing with matrix-valued symbols in a non-reflexive Bergman space, in which case a condition on the oscillation of the symbol (a logarithmic VMO condition) must be added.
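For orientation, the classical construction that the thesis generalizes can be written down for the unweighted Bergman space of the unit disc (a standard textbook definition matching the "multiplication followed by projection" description above):

```latex
% A^2(D): the analytic functions in L^2 of the unit disc D;
% P : L^2(D) -> A^2(D) the orthogonal (Bergman) projection.
\[
  T_f \colon A^2(\mathbb{D}) \to A^2(\mathbb{D}),
  \qquad
  T_f\, g = P(fg) = (P \circ M_f)\, g ,
\]
% where M_f is multiplication by the symbol f. In the distributional
% setting of the thesis, f is a distribution and the product fg is
% interpreted via a suitable duality pairing.
```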