23 results for finite-time attractiveness in probability
in Helda - Digital Repository of the University of Helsinki
Abstract:
Real-time scheduling algorithms, such as Rate Monotonic and Earliest Deadline First, guarantee that calculations are performed within a pre-defined time. As many real-time systems operate on limited battery power, these algorithms have been enhanced with power-aware properties. In this thesis, 13 power-aware real-time scheduling algorithms for processor, device and system-level use are explored.
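As a hedged illustration of the baseline (non-power-aware) algorithms mentioned above, the classical single-processor schedulability tests for Rate Monotonic and Earliest Deadline First can be sketched as follows; the task set is hypothetical and the sketch omits all power-aware extensions discussed in the thesis.

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient utilization test for Rate Monotonic.

    tasks: list of (computation_time, period) pairs for implicit-deadline
    periodic tasks on one processor (hypothetical values)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

def edf_schedulable(tasks):
    """EDF schedules any such task set iff total utilization <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

# Example task set: (execution time, period), utilization ~0.708.
tasks = [(1, 4), (2, 6), (1, 8)]
print(rm_schedulable(tasks))   # RM bound for n=3 is ~0.780
print(edf_schedulable(tasks))
```

The RM test is sufficient but not necessary; a task set that fails the utilization bound may still be schedulable and would need an exact response-time analysis.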
Abstract:
This study examined the Greeks of options and the trading results of delta hedging strategies under three different time units, or option-pricing models. These time units were calendar time, trading time and continuous time using discrete approximation (CTDA) time. The CTDA time model is a pricing model that, among other things, accounts for intraday and weekend patterns in volatility. For the CTDA time model, some additional theta measures believed to be usable in trading were developed. The study appears to verify that the Greeks differ across time units. It also revealed that these differences influence the delta hedging of options and portfolios. Although it is difficult to say which of the time models is the most usable, as this depends heavily on the trader's view of the passing of time, on market conditions and on the portfolio in question, the CTDA time model can be viewed as an attractive alternative.
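To illustrate how the choice of time unit feeds into the Greeks, here is a minimal Black-Scholes delta sketch in which only the time-to-expiry input changes between a calendar-time and a trading-time clock. The contract parameters are hypothetical, and this is plain Black-Scholes, not the CTDA model developed in the study.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_delta(spot, strike, rate, vol, tau):
    """Black-Scholes call delta; tau is time to expiry in years,
    measured on whichever clock (calendar or trading days) is chosen."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * tau) \
         / (vol * math.sqrt(tau))
    return norm_cdf(d1)

# Hypothetical contract: 30 calendar days vs ~21 trading days to expiry.
calendar_tau = 30 / 365
trading_tau = 21 / 252
print(bs_call_delta(100, 100, 0.02, 0.25, calendar_tau))
print(bs_call_delta(100, 100, 0.02, 0.25, trading_tau))
```

Even this toy example shows that the two clocks give slightly different deltas for the same contract, which is the effect the study traces through to hedging results.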
Abstract:
This paper focuses on the time dimension in consumers' image construction processes. Two new concepts are introduced: image heritage, covering consumers' past experiences of the company, and image-in-use, covering the present image construction process. Together, image heritage and image-in-use capture the dynamic, relational, social and contextual features of corporate image construction processes. Qualitative data from a retailing context were collected and analysed following a grounded theory approach. The study demonstrates that consumers' corporate images have long roots in past experiences. Understanding consumers' image heritage provides opportunities for understanding how consumers might interpret management initiatives and branding activities in the present.
Abstract:
This dissertation is a theoretical study of finite-state based grammars used in natural language processing. The study is concerned with certain varieties of finite-state intersection grammars (FSIG) whose parsers define regular relations between surface strings and annotated surface strings. The study focuses on the following three aspects of FSIGs. (i) Computational complexity of grammars under limiting parameters. In the study, the computational complexity in practical natural language processing is approached through performance-motivated parameters on structural complexity. Each parameter splits some grammars in the Chomsky hierarchy into an infinite set of subset approximations. When the approximations are regular, they seem to fall into the logarithmic-time hierarchy and the dot-depth hierarchy of star-free regular languages. This theoretical result is important and possibly relevant to grammar induction. (ii) Linguistically applicable structural representations. Related to the linguistically applicable representations of syntactic entities, the study contains new bracketing schemes that cope with dependency links, left- and right-branching, crossing dependencies and spurious ambiguity. New grammar representations that resemble the Chomsky-Schützenberger representation of context-free languages are presented in the study; they include, in particular, representations for mildly context-sensitive non-projective dependency grammars whose performance-motivated approximations are parseable in linear time. (iii) Compilation and simplification of linguistic constraints. Efficient compilation methods for certain regular operations, such as generalized restriction, are presented. These include an elegant algorithm that has already been adopted as the approach in a proprietary finite-state tool. In addition to the compilation methods, an approach to on-the-fly simplification of finite-state representations for parse forests is sketched.
These findings are tightly coupled with each other under the theme of locality. I argue that the findings help us to develop better, linguistically oriented formalisms for finite-state parsing and more efficient parsers for natural language processing. Keywords: syntactic parsing, finite-state automata, dependency grammar, first-order logic, linguistic performance, star-free regular approximations, mildly context-sensitive grammars
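The intersection of regular constraints at the heart of FSIG parsing can be illustrated with the textbook product construction on two toy deterministic automata: a string is accepted only if it satisfies both constraints. This is a generic sketch with a hypothetical encoding, not the grammar representation used in the dissertation.

```python
def accepts(dfa, word):
    """dfa = (delta, start, accepting); delta: (state, sym) -> state."""
    delta, state, accepting = dfa
    for sym in word:
        state = delta.get((state, sym))
        if state is None:
            return False
    return state in accepting

def intersect(dfa1, dfa2):
    """Product construction: run both automata in lockstep on pair states."""
    delta1, s1, acc1 = dfa1
    delta2, s2, acc2 = dfa2
    delta = {}
    for (p, a), q in delta1.items():
        for (r, b), t in delta2.items():
            if a == b:
                delta[((p, r), a)] = (q, t)
    return delta, (s1, s2), {(p, r) for p in acc1 for r in acc2}

# Toy constraints over {'a', 'b'}: even number of a's, and word ends in b.
even_a = ({(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}, 0, {0})
ends_b = ({('s', 'a'): 's', ('s', 'b'): 't',
           ('t', 'a'): 's', ('t', 'b'): 't'}, 's', {'t'})
both = intersect(even_a, ends_b)
print(accepts(both, "aab"))  # True: two a's and ends in b
print(accepts(both, "ab"))   # False: odd number of a's
```

In an FSIG setting the automata encode linguistic constraints rather than toy parity conditions, and the product is taken over many constraint automata at once, which is why the compilation and simplification methods above matter.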
Abstract:
In this study I look at what people want to express when they talk about time in Russian and Finnish, and why they use the means they use. The material consists of expressions of time: 1087 from Russian and 1141 from Finnish. They have been collected from dictionaries, usage guides, corpora, and the Internet. An expression means here an idiomatic set of words in a preset form, a collocation or construction. They are studied as lexical entities, without a context, and analysed and categorized according to various features. The theoretical background for the study includes two completely different approaches. Functional Syntax is used in order to find out what general meanings the speaker wishes to convey when talking about time and how these meanings are expressed in specific languages. Conceptual metaphor theory is used for explaining why the expressions are as they are, i.e. what kind of conceptual metaphors (transfers from one conceptual domain to another) they include. The study has resulted in a grammatically glossed list of time expressions in Russian and Finnish, a list of 56 general meanings involved in these time expressions and an account of the means (constructions) that these languages have for expressing the general meanings defined. It also includes an analysis of conceptual metaphors behind the expressions. The general meanings involved turned out to revolve around expressing duration, point in time, period of time, frequency, sequence, passing of time, suitable time and the right time, life as time, limitedness of time, and some other notions having less obvious semantic relations to the others. 
Conceptual metaphor analysis of the material has shown that time is conceptualized in Russian and Finnish according to the metaphors Time Is Space (Time Is Container, Time Has Direction, Time Is Cycle, and the Time Line Metaphor), Time Is Resource (and its submapping Time Is Substance), Time Is Actor; and some characteristics are added to these conceptualizations with the help of the secondary metaphors Time Is Nature and Time Is Life. The limits between different conceptual metaphors and the connections these metaphors have with one another are looked at with the help of the theory of conceptual integration (the blending theory) and its schemas. The results of the study show that although Russian and Finnish are typologically different, they are very similar both in the needs of expression their speakers have concerning time, and in the conceptualizations behind expressing time. This study introduces both theoretical and methodological novelties in the nature of material used, in developing empirical methodology for conceptual metaphor studies, in the exactness of defining the limits of different conceptual metaphors, and in seeking unity among the different facets of time. Keywords: time, metaphor, time expression, idiom, conceptual metaphor theory, functional syntax, blending theory
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
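As a non-Bayesian illustration of monotonic regression for a single covariate, the classical pool adjacent violators algorithm (PAVA) computes the least-squares non-decreasing fit; the thesis itself formulates a Bayesian estimation procedure, which this sketch does not reproduce.

```python
def pava(y, weights=None):
    """Pool Adjacent Violators Algorithm: the least-squares fit to y
    that is non-decreasing in the index. A classical (non-Bayesian)
    illustration of monotonic regression."""
    if weights is None:
        weights = [1.0] * len(y)
    # Each block holds (weighted mean, total weight, number of points).
    blocks = []
    for value, weight in zip(y, weights):
        blocks.append([float(value), float(weight), 1])
        # Merge adjacent blocks while monotonicity is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w, n1 + n2])
    fitted = []
    for mean, _, n in blocks:
        fitted.extend([mean] * n)
    return fitted

print(pava([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

A Bayesian treatment, as in the thesis, would instead place a prior over monotone functions and summarise the posterior predictive distribution rather than return a single point fit.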
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating dominating codes are more appropriate.
This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs; these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
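The connection between sensor minimisation and dominating sets can be illustrated with the standard greedy approximation for minimum dominating set; this is a generic centralised algorithm on a hypothetical toy graph, not one of the local algorithms or PTAS results contributed by the thesis.

```python
def greedy_dominating_set(adj):
    """Greedy O(log n)-approximation for minimum dominating set.

    adj: dict mapping each vertex to the set of its neighbours.
    Repeatedly pick the vertex that dominates the most
    still-undominated vertices (itself plus its neighbours)."""
    undominated = set(adj)
    chosen = []
    while undominated:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        chosen.append(best)
        undominated -= {best} | adj[best]
    return chosen

# Toy star graph: centre 0 joined to leaves 1..4; one sensor suffices.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(greedy_dominating_set(adj))  # [0]
```

For the locating variants mentioned above, a dominating set is not enough: the code must additionally give every vertex a distinct "signature" of dominating neighbours, which is what identifying and locating dominating codes formalise.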
Abstract:
In northern latitudes, temperature is the key factor driving the temporal scales of biological activity, namely the length of the growing season and the seasonal efficiency of photosynthesis. The formation of atmospheric concentrations of biogenic volatile organic compounds (BVOCs) is linked to the intensity of biological activity. However, the interdisciplinary picture of the role of temperature in the biological processes related to the annual cycle, photosynthesis and atmospheric chemistry is not yet complete. The aim of this study was to improve understanding of the role of temperature in three interlinked areas: 1) the onset of the growing season, 2) photosynthetic efficiency and 3) BVOC air concentrations in a boreal forest. The results present a cross-section of the role of temperature on different spatial (southern to northern boreal), structural (tree, forest stand, forest) and temporal (day, season, year) scales. The fundamental status of the Thermal Time model in predicting the onset of spring recovery was confirmed. However, it was recommended that sequential models would be more appropriate tools when the onset of the growing season is estimated under a warmer climate. A similar type of relationship between photosynthetic efficiency and temperature history was found in both southern and northern boreal forest stands. This result draws attention to the critical question of the seasonal capacity of coniferous species to emit organic compounds under a warmer climate. New knowledge was obtained about the temperature dependence of the concentrations of biogenic volatile organic compounds in a boreal forest stand. The seasonal progress and the inter-correlation of BVOC concentrations in ambient air indicated a link to biological activity. Temperature was found to be the main driving factor for the concentrations. However, other factors may also play a significant role, especially when peak concentrations are studied.
There is strong evidence that the spring recovery and phenological events of many plant species have already advanced in Europe. This study does not fully support this observation. In a boreal forest, changes in the annual cycle, especially the temperature requirement in winter, would have an impact on the atmospheric BVOC composition. According to this study, more joint phenological and BVOC field observations and laboratory experiments are still needed to improve these scenarios.
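The Thermal Time model referred to above can be sketched as a simple growing degree-day accumulation: spring recovery is predicted on the first day when heat accumulated above a base temperature exceeds a threshold. The base temperature and threshold below are illustrative values, not the parameters calibrated in the study.

```python
def thermal_time_onset(daily_mean_temps, base_temp=5.0, threshold=50.0):
    """Thermal Time (growing degree day) model sketch.

    Returns the 1-indexed day on which the accumulated temperature sum
    above base_temp first reaches threshold, or None if never reached.
    Parameter values are hypothetical."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(0.0, temp - base_temp)
        if accumulated >= threshold:
            return day
    return None

# Hypothetical spring warming sequence of daily mean temperatures (deg C).
temps = [2, 4, 7, 9, 12, 14, 15, 16, 18, 17, 19, 20]
print(thermal_time_onset(temps))  # day 9
```

Sequential models, which the study recommends for a warmer climate, extend this idea by requiring a chilling phase to be completed before degree-day accumulation begins; that extra stage is not shown here.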
Abstract:
Large carnivore populations are currently recovering from past extirpation efforts and expanding back into their original habitats. At the same time, human activities have left very few wilderness areas with habitats suitable and large enough to maintain populations of large carnivores without human contact. Consequently, the long-term future of large carnivores depends on their successful integration into landscapes where humans live. Thus, understanding their behaviour and interaction with surrounding habitats is of utmost importance in the development of management strategies for large carnivores. This applies also to brown bears (Ursus arctos), which were almost exterminated from Scandinavia and Finland at the turn of the century but are now expanding their range, with current population estimates of approximately 2600 bears in Scandinavia and 840 in Finland. This thesis focuses on the large-scale habitat use and population dynamics of brown bears in Scandinavia, with the objective of developing modelling approaches that support the management of bear populations. Habitat analysis shows that bear home ranges occur mainly in forested areas with a low level of human influence relative to surrounding areas. Habitat modelling based on these findings allows identification and quantification of the potentially suitable areas for bears in Scandinavia. Additionally, this thesis presents novel improvements to home range estimation that enable realistic estimates of the effective area required for bears to establish a home range. This is achieved by fitting models to the radio-tracking data to establish the amount of temporal autocorrelation and the proportion of time spent in different habitat types. Together these form a basis for the landscape-level management of the expanding population. Successful management of bears also requires assessment of the consequences of harvest on population viability.
An individual-based simulation model, accounting for sexually selected infanticide, was used to investigate the possibility of increasing the harvest using different hunting strategies, such as trophy harvest of males. The results indicated that the population can sustain twice the current harvest rate. However, harvest should be changed gradually while carefully monitoring population growth, as some effects of increased harvest may manifest themselves only after a time delay. The results and methodological improvements in this thesis can be applied to the Finnish bear population and to other large carnivores. They provide grounds for the further development of spatially realistic, management-oriented models of brown bear dynamics that can make projections of the future distribution of bears while accounting for the development of human activities.
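The growth-versus-harvest balance underlying such simulation results can be illustrated with a minimal deterministic projection; the thesis uses an individual-based model with sexually selected infanticide, which this toy sketch does not capture, and all parameter values below are hypothetical.

```python
def project_population(n0, growth_rate, harvest_rate, years):
    """Toy deterministic projection of a harvested population.

    Each year the population grows by growth_rate, then a fixed
    proportion harvest_rate is removed. Purely illustrative; not the
    individual-based model used in the thesis."""
    trajectory = [float(n0)]
    for _ in range(years):
        n = trajectory[-1]
        n *= 1 + growth_rate      # reproduction and survival
        n *= 1 - harvest_rate     # proportional harvest
        trajectory.append(n)
    return trajectory

# Hypothetical rates: growth outpaces a 5% harvest, but not a 20% one.
print(project_population(2600, growth_rate=0.10, harvest_rate=0.05, years=5)[-1])
print(project_population(2600, growth_rate=0.10, harvest_rate=0.20, years=5)[-1])
```

Such a sketch also hints at why gradual harvest changes are advised: effects of a rate that pushes net growth below replacement compound over years rather than appearing immediately.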
Abstract:
Eutrophication favours harmful algal blooms worldwide. The blooms cause toxic outbreaks and deteriorate recreational and aesthetic values, causing both economic loss and illness or death of humans and animals. The Baltic Sea is the world's only large brackish-water habitat with recurrent blooms of toxic cyanobacteria capable of biological fixation of atmospheric nitrogen gas. Phosphorus is assumed to be the main limiting factor, along with temperature and light, for the growth of these cyanobacteria. This thesis evaluated the role of phosphorus nutrition as a regulating factor for the occurrence of nitrogen-fixing cyanobacteria blooms in the Baltic Sea, utilising experimental laboratory and field studies and surveys on varying spatial scales. Cellular phosphorus sources were found to be able to support substantial growth of the two main bloom-forming species, Aphanizomenon sp. and Nodularia spumigena. However, N. spumigena growth seemed independent of phosphorus source, whereas Aphanizomenon sp. grew best in a phosphate-enriched environment. Apparent discrepancies between field observations and experiments are explained by the typical seasonal, temperature-dependent development of Aphanizomenon sp. and N. spumigena biomass, which allows the two species to store ambient pre-bloom excess phosphorus in different ways. Field experiments revealed natural cyanobacteria bloom communities to be predominantly phosphorus-deficient during blooms. Phosphate additions were found to increase the accumulation of phosphorus relatively most in the planktonic size fraction dominated by the nitrogen-fixing cyanobacteria. Aphanizomenon sp. responded to phosphate additions, whereas the phosphorus nutritive status of N. spumigena seemed independent of phosphate addition. The seasonal development of phosphorus deficiency differs between the two species, with N. spumigena showing indications of phosphorus deficiency over a longer time period in the open sea.
Coastal upwelling introduces phosphorus to the surface layer during nutrient-deficient conditions in summer. The species-specific abilities of Aphanizomenon sp. and N. spumigena to utilise the phosphate enrichment of the surface layer caused by coastal upwelling were clarified. Typical bloom-time vertical distributions of biomass maxima were found to render N. spumigena more susceptible to advection by surface currents caused by coastal upwellings. Aphanizomenon sp. populations residing in the seasonal thermocline were observed to be able to utilise the phosphate enrichment, and a bloom was produced with a two- to three-week time lag after the relaxation of upwelling. Consistently high concentrations of dissolved inorganic phosphorus, caused by persistent internal loading of phosphorus, were found to be the main source of phosphorus for large-scale pelagic blooms. External loads were estimated to contribute only a fraction of the phosphorus available for open-sea blooms. Remineralization of organic forms of phosphorus, along with vertical mixing down to the permanent halocline during winter, sets the level of available phosphorus for the next growth season. Events such as upwelling are important in replenishing phosphate concentrations during the nutrient-depleted growth season. Autecological characteristics of the two main bloom-forming species favour Aphanizomenon sp. populations in utilising the abundant excess phosphate concentrations and the phosphate pulses mediated through upwelling, whilst N. spumigena displays a predominantly phosphorus-limited growth mode and relies on scarcer cellular phosphorus stores and presumably dissolved organic phosphorus compounds for growth. The Baltic Sea is hypothesised to be in an inhibited state of recovery due to the extensive historical external nutrient loading, extensive internal phosphorus loading and the substantial nitrogen load caused by cyanobacterial nitrogen fixation. This state of the sea is characterised as a vicious circle.
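Phosphorus-limited growth of the kind discussed above is conventionally described by Monod kinetics, in which the specific growth rate saturates with the limiting nutrient. This is a generic textbook relation with hypothetical parameters, not the analysis performed in the thesis.

```python
def monod_growth(mu_max, half_sat, phosphate):
    """Monod kinetics: specific growth rate as a saturating function of
    the limiting nutrient (here dissolved phosphate).

    mu_max: maximum specific growth rate; half_sat: phosphate
    concentration giving half of mu_max. All values hypothetical."""
    return mu_max * phosphate / (half_sat + phosphate)

# Growth responds strongly to phosphate pulses at low concentrations
# (cf. the responsiveness of Aphanizomenon sp. described above) and
# flattens out near saturation.
for p in (0.05, 0.2, 1.0):  # phosphate concentration, arbitrary units
    print(p, monod_growth(mu_max=0.5, half_sat=0.2, phosphate=p))
```

A species relying mainly on internal phosphorus stores, as N. spumigena appears to, would be better described by cell-quota (Droop-type) models in which growth depends on stored rather than ambient phosphorus; that variant is not sketched here.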
Abstract:
The Jorvi Bipolar Study (JoBS) is an ongoing collaborative bipolar research project between the Department of Mental Health and Alcohol Research of the National Public Health Institute, Helsinki, and the Department of Psychiatry, Jorvi Hospital, Helsinki University Central Hospital (HUCH), Espoo, Finland. The JoBS is a prospective, naturalistic cohort study of secondary-level care psychiatric out- and inpatients with a new episode of Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) bipolar disorder (BD). Altogether, 1630 patients (aged 18-59 years) were screened using the Mood Disorder Questionnaire (MDQ) for a possible new episode of DSM-IV BD. 490 patients were interviewed with a semi-structured interview [the Structured Clinical Interview for DSM-IV Disorders, research version with Psychotic Screen (SCID-I/P)]. 191 patients with a new episode of DSM-IV BD were included in the bipolar cohort study. Psychiatric comorbidity was evaluated using semi-structured interviews. At the 6- and 18-month follow-ups, the interviews were repeated and life-chart methodology was used to integrate all available information about the nature and duration of all the different phases. Suicidal behaviour was examined both at intake and at follow-up by a psychometric scale [Scale for Suicidal Ideation (SSI)], interviewer's questions, and medical and psychiatric records. The aim of this thesis was to evaluate the prevalence of suicidal behaviour and the incidence of suicide attempts, and to examine a wide range of risk factors for attempted suicide, both at intake and at follow-up, in a representative secondary-level sample of psychiatric in- and outpatients with BD. In this study suicidal behaviour was common among psychiatric patients with BD. During the episode in which patients were included into the cohort study (the index episode), 20% of the patients had attempted suicide and 61% had suicidal ideation.
Severity of the depressive episode and hopelessness were independent risk factors for suicidal ideation, whereas hopelessness, comorbid personality disorder and a previous suicide attempt predicted suicide attempts during the index episode. There were no differences in the prevalence of suicidal behaviour between bipolar I and II disorder; the risk factors were overlapping but not identical. During the index episode, suicide attempts took place during depressive, mixed and depressive mixed phases. Furthermore, there were marked differences in the level of suicidal ideation during the different phases, with the highest levels during the mixed phases of the illness. Hopelessness was independently associated with suicidal behaviour during the depressive phase. A subjective rating of the severity of depression (Beck Depression Inventory) and younger age predicted suicide attempts during mixed phases. During the 18-month follow-up, 20% of patients attempted suicide. Previous suicide attempts, hopelessness, a depressive phase at the index episode and younger age at intake were independent risk factors for suicide attempts during follow-up. Taken altogether, 55% of patients attempted suicide before the index episode, during the index episode or during follow-up. The incidence of suicide attempts was 37-fold during combined mixed and depressive mixed states and 18-fold during the major depressive phase, as compared with other phases. A prior suicide attempt and time spent in combined mixed phases (mixed and depressive mixed) and in depressive phases independently predicted suicide attempts during follow-up. More than half of the patients had attempted suicide during their lifetime, a finding which highlights the public health importance of suicidal behaviour in bipolar disorder. Clinically, it is crucial to recognize BD and to manage the mixed and depressive phases of bipolar patients quickly and effectively, as time spent in depressive and mixed phases involves a remarkably high risk of suicide attempts.
Abstract:
For achieving efficient fusion energy production, the plasma-facing wall materials of a fusion reactor should allow long-term operation. In the next-step fusion device, ITER, the first-wall region facing the highest heat and particle load, i.e. the divertor area, will mainly consist of tiles based on tungsten. During reactor operation, the tungsten material is slowly but inevitably saturated with tritium, the relatively short-lived hydrogen isotope used in the fusion reaction. The amount of tritium retained in the wall materials should be minimized and its recycling back to the plasma must be unrestrained, otherwise it cannot be used for fueling the plasma. A very expensive, and thus economically unviable, solution is to replace the first wall frequently. A better solution is to heat the walls to temperatures where tritium is released. Unfortunately, the exact mechanisms of hydrogen release in tungsten are not known. In this thesis both experimental and computational methods have been used to study the release and retention of hydrogen in tungsten. The experimental work consists of hydrogen implantations into pure polycrystalline tungsten, the determination of the hydrogen concentrations using ion beam analyses (IBA), and monitoring of the out-diffused hydrogen gas with thermodesorption spectrometry (TDS) as the tungsten samples are heated to elevated temperatures. By combining IBA methods with TDS, both the retained amount of hydrogen and the temperatures needed for hydrogen release are obtained. With computational methods, the hydrogen-defect interactions and implantation-induced irradiation damage can be examined at the atomic level. Multiscale modelling combines the results obtained from computational methodologies applicable at different length and time scales.
Electron density functional theory calculations were used to determine the energetics of the elementary processes of hydrogen in tungsten, such as diffusion and trapping at vacancies and surfaces. Results on the energetics of pure tungsten defects were used in the development of a classical bond-order potential describing tungsten defects for molecular dynamics simulations. The developed potential was used to determine defect clustering and annihilation properties. These results were further employed in binary collision and rate theory calculations to determine the evolution of the large defect clusters that trap hydrogen in the course of implantation. The computational results for the defect and trapped-hydrogen concentrations compared successfully with the experimental results. With this multiscale analysis, the experimental results obtained in this thesis and those found in the literature were explained both quantitatively and qualitatively.
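The thermodesorption measurements can be interpreted with the textbook first-order Polanyi-Wigner desorption rate; the sketch below integrates it over a linear temperature ramp to mimic a TDS spectrum. The attempt-frequency prefactor and detrapping energy are illustrative values, not parameters fitted in this work.

```python
import math

def desorption_rate(coverage, temperature, prefactor, energy_ev):
    """First-order Polanyi-Wigner rate:
    rate = nu * coverage * exp(-E / (kB * T)).
    Prefactor nu and detrapping energy E are hypothetical here."""
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    return prefactor * coverage * math.exp(-energy_ev / (k_b * temperature))

def tds_ramp(coverage, heating_rate, prefactor, energy_ev,
             t0=300.0, steps=2000, dt=1.0):
    """Integrate coverage over a linear temperature ramp; returns
    (temperature, desorption rate) pairs, mimicking a TDS spectrum."""
    spectrum = []
    temp = t0
    for _ in range(steps):
        rate = desorption_rate(coverage, temp, prefactor, energy_ev)
        spectrum.append((temp, rate))
        coverage = max(0.0, coverage - rate * dt)
        temp += heating_rate * dt
    return spectrum

spectrum = tds_ramp(coverage=1.0, heating_rate=0.5,
                    prefactor=1e13, energy_ev=1.4)
peak_temp = max(spectrum, key=lambda tr: tr[1])[0]
print(peak_temp)  # peak desorption temperature in K
```

In practice the peak temperature shifts with the detrapping energy, which is how TDS spectra are used to infer the trap energetics that the DFT calculations above provide from first principles.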
Abstract:
A better understanding of stock price changes is important in guiding many economic activities. Since prices often do not change without good reason, the search for related explanatory variables has attracted many enthusiasts. This book seeks answers from prices per se by relating price changes to their conditional moments. This is based on the belief that prices are the products of a complex psychological and economic process and that their conditional moments derive ultimately from these psychological and economic shocks. Utilizing information about conditional moments hence makes it an attractive alternative to using other selective financial variables in explaining price changes. The first paper examines the relation between the conditional mean and the conditional variance using information about moments in three types of conditional distributions; it finds that the significance of the estimated mean and variance ratio can be affected by the assumed distributions and the time variations in skewness. The second paper decomposes the conditional industry volatility into a concurrent market component and an industry-specific component; it finds that market volatility is on average responsible for a rather small share of total industry volatility: 6 to 9 percent in the UK and 2 to 3 percent in Germany. The third paper looks at the heteroskedasticity in stock returns through an ARCH process supplemented with a set of conditioning information variables; it finds that the heteroskedasticity in stock returns takes several forms, including deterministic changes in variances due to seasonal factors, random adjustments in variances due to market and macro factors, and ARCH processes with past information.
The fourth paper examines the role of higher moments, especially skewness and kurtosis, in determining expected returns; it finds that total skewness and total kurtosis are more relevant non-beta risk measures, and that they are costly to diversify away, due either to the possible elimination of their desirable parts or to the unsustainability of diversification strategies based on them.
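The ARCH-type heteroskedasticity and the higher moments discussed in the third and fourth papers can be illustrated together with a short simulation: an ARCH(1) process produces returns whose sample kurtosis exceeds the Gaussian value even though the innovations are normal. Parameter values are illustrative, not estimates from the book.

```python
import math
import random

def simulate_arch1(n, omega, alpha, seed=0):
    """Simulate returns from an ARCH(1) process:
    r_t = sigma_t * z_t,  sigma_t^2 = omega + alpha * r_{t-1}^2,
    with z_t standard normal. Parameters are hypothetical."""
    rng = random.Random(seed)
    returns, prev_r = [], 0.0
    for _ in range(n):
        sigma2 = omega + alpha * prev_r ** 2
        r = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
        returns.append(r)
        prev_r = r
    return returns

def sample_moments(xs):
    """Sample skewness and excess kurtosis of a series."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

rets = simulate_arch1(10_000, omega=0.01, alpha=0.5)
skew, ex_kurt = sample_moments(rets)
print(skew, ex_kurt)  # conditional heteroskedasticity fattens the tails
```

This fat-tail effect is why total kurtosis behaves as a priced, hard-to-diversify risk measure in the fourth paper's setting.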