33 results for Random real
Abstract:
This paper examines the relationships between uncertainty and the perceived usefulness of traditional annual budgets versus flexible budgets in 95 Swedish companies. We hypothesise that both the perceived usefulness of the annual budget and the attitudes towards more flexible budget alternatives are influenced by the uncertainty the companies face. Our study distinguishes between two separate kinds of uncertainty: exogenous stochastic uncertainty (deriving from the firm's environment) and endogenous deterministic uncertainty (caused by the strategic choices made by the firm itself). Based on a structural equation modelling analysis of data from a mail survey, we found that the more accentuated the exogenous uncertainty a company faces, the more accentuated the expected trend towards flexibility in its budget system; conversely, the more endogenous uncertainty a company faces, the more negative its attitudes towards budget flexibility. We also found that these relationships were not present with regard to attitudes towards the usefulness of the annual budget. It is noteworthy, however, that there was a significant negative relationship between the perceived usefulness of the annual budget and budget flexibility. Our results thus indicate that the degree of flexibility in a budget system is influenced by general attitudes towards the usefulness of traditional budgets, by the actual degree of exogenous uncertainty the company faces, and by the strategy it executes.
Abstract:
This study develops a real options approach for analyzing the optimal risk adoption policy in an environment where adoption means a switch from one stochastic flow representation to another. We establish that increased volatility need not decelerate investment, as predicted by the standard real options literature, once the underlying volatility of the state is made endogenous. We prove that for a decision maker with a convex (concave) objective function, increased post-adoption volatility increases (decreases) the expected cumulative present value of the post-adoption profit flow, which consequently decreases (increases) the option value of waiting and therefore accelerates (decelerates) current investment.
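The convexity argument is essentially Jensen's inequality. As a minimal illustration, not the paper's model, the following sketch estimates the expected discounted payoff of a geometric Brownian motion under low and high volatility; for a convex payoff, extra volatility raises the expected value, which in the paper's logic lowers the option value of waiting. All parameter values here are assumed for illustration.

```python
import math
import random

def expected_discounted_payoff(sigma, payoff, n_paths=20000, T=1.0,
                               mu=0.03, r=0.05, x0=1.0, seed=1):
    """Monte Carlo estimate of E[exp(-r*T) * payoff(X_T)], where X follows a
    geometric Brownian motion with drift mu and volatility sigma."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Exact terminal draw for a GBM: no path discretisation needed.
        x_T = x0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        total += math.exp(-r * T) * payoff(x_T)
    return total / n_paths

convex = lambda x: max(x - 1.0, 0.0)   # option-like (convex) profit flow
value_low = expected_discounted_payoff(0.1, convex)
value_high = expected_discounted_payoff(0.4, convex)
# Higher volatility raises the expected value of the convex payoff (Jensen);
# a concave payoff would show the opposite ordering.
```

Note that the drift `mu` is held fixed across the two runs, so the comparison isolates the effect of volatility alone.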
Abstract:
The purpose of this paper is to test for the effect of uncertainty in a model of real estate investment in Finland during the highly cyclical period 1975–1998. We use two alternative measures of uncertainty: the volatility of stock market returns, and the heterogeneity of the answers to the quarterly business survey of the Confederation of Finnish Industry and Employers. The econometric analysis is based on the autoregressive distributed lag (ADL) model, and the paper applies a 'general-to-specific' modelling approach. We find that the heterogeneity measure is significant in the model, but the volatility of stock market returns is not. The empirical results give some evidence of an uncertainty-induced threshold slowing down real estate investment in Finland.
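For readers unfamiliar with the ADL setup, a minimal sketch follows; it fits a generic ADL(1,1) model y_t = a + b·y_{t-1} + c0·x_t + c1·x_{t-1} + e_t by ordinary least squares on synthetic data. The variables, lag orders, and coefficient values are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)

# Simulate an ADL(1,1) process with known coefficients.
a, b, c0, c1 = 0.5, 0.6, 1.0, -0.3
y = np.zeros(n)
for t in range(1, n):
    y[t] = a + b * y[t - 1] + c0 * x[t] + c1 * x[t - 1] + 0.1 * rng.normal()

# OLS on the lagged design matrix recovers the coefficients;
# a 'general-to-specific' search would then drop insignificant lags.
X = np.column_stack([np.ones(n - 1), y[:-1], x[1:], x[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
```

With 500 observations and small error variance, the estimated `beta` lies close to the true `(a, b, c0, c1)`.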
Abstract:
Irritable bowel syndrome (IBS) is a common multifactorial functional intestinal disorder, the pathogenesis of which is not completely understood. Increasing scientific evidence suggests that microbes are involved in the onset and maintenance of IBS symptoms. The microbiota of the human gastrointestinal (GI) tract constitutes a massive and complex ecosystem consisting mainly of obligate anaerobic microorganisms, making the use of culture-based methods demanding and prone to misinterpretation. To overcome these drawbacks, an extensive panel of species- and group-specific real-time PCR assays for accurate quantification of bacteria from fecal samples was developed, optimized, and validated. As a result, the target bacteria were detectable at a minimum concentration of approximately 10,000 bacterial genomes per gram of fecal sample, corresponding to the sensitivity to detect subpopulations constituting 0.000001% of the total fecal microbiota. The real-time PCR panel, covering both commensal and pathogenic microorganisms, was used to compare the intestinal microbiota of patients suffering from IBS with that of a healthy control group devoid of GI symptoms. Both the IBS and control groups showed considerable individual variation in gut microbiota composition. Sorting the IBS patients according to symptom subtype (diarrhea-predominant, constipation-predominant, and alternating type) revealed that lower amounts of Lactobacillus spp. were present in the samples of diarrhea-predominant IBS patients, whereas constipation-predominant IBS patients carried increased amounts of Veillonella spp. In the screening of intestinal pathogens, 17% of IBS samples tested positive for Staphylococcus aureus, whereas no positive cases were discovered among healthy controls. Furthermore, the methodology was applied to monitor the effects of a multispecies probiotic supplementation on the GI microbiota of IBS sufferers.
In the placebo-controlled, double-blind probiotic intervention trial of IBS patients, each supplemented probiotic strain was detected in fecal samples. The intestinal microbiota remained stable during the trial, except for Bifidobacterium spp., which increased in the placebo group and decreased in the probiotic group. The combination of assays developed and applied in this thesis covers 300–400 known bacterial species, along with a number of as-yet-unknown phylotypes. Hence, it provides a good means of studying the intestinal microbiota, irrespective of intestinal condition and health status. In particular, it allows the screening and identification of microbes putatively associated with IBS. The alterations in the gut microbiota discovered here support the hypothesis that microbes are likely to contribute to the pathophysiology of IBS. The central question is whether the microbiota changes described represent the cause of, rather than the effect of, disturbed gut physiology. Therefore, more studies are needed to determine the role and importance of individual microbial species or groups in IBS. In addition, it is essential that the microbial alterations observed in this study be confirmed using a larger set of IBS samples of different subtypes, preferably from various geographical locations.
Abstract:
Markov random fields (MRFs) are popular in image processing applications for describing spatial dependencies between image units. Here, we review the theory and models of MRFs and apply them to improve forest inventory estimates. Typically, autocorrelation between study units is a nuisance in statistical inference, but we take advantage of the dependencies to smooth noisy measurements by borrowing information from neighbouring units. We build a stochastic spatial model, which we estimate with a Markov chain Monte Carlo simulation method. The smoothed values are validated against another data set, increasing our confidence that the estimates are more accurate than the originals.
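The smoothing idea can be sketched with a hypothetical one-dimensional Gaussian MRF (not the thesis's forest-inventory model): each unit's latent value is pulled both toward its own noisy measurement and toward the average of its neighbours, and a Gibbs sampler averages posterior draws to produce the smoothed estimates. All variances and the graph below are assumed for illustration.

```python
import random

def gibbs_smooth(y, neighbours, sigma2=1.0, tau2=0.5, n_iter=200, burn=50, seed=0):
    """Posterior-mean smoothing under a Gaussian MRF prior.

    y[i]          -- noisy measurement of unit i (observation variance sigma2)
    neighbours[i] -- indices of the units adjacent to unit i
    tau2          -- prior variance controlling spatial smoothness
    """
    rng = random.Random(seed)
    n = len(y)
    x = list(y)                     # initialise the latent field at the data
    totals = [0.0] * n
    for it in range(n_iter):
        for i in range(n):
            nbrs = neighbours[i]
            prec = 1.0 / sigma2 + len(nbrs) / tau2            # conditional precision
            mean = (y[i] / sigma2 + sum(x[j] for j in nbrs) / tau2) / prec
            x[i] = rng.gauss(mean, (1.0 / prec) ** 0.5)       # full-conditional draw
        if it >= burn:
            totals = [t + v for t, v in zip(totals, x)]
    return [t / (n_iter - burn) for t in totals]

# A chain of 30 units whose true value is 0; smoothing shrinks the noise
# by borrowing information from each unit's neighbours.
data_rng = random.Random(42)
noisy = [data_rng.gauss(0.0, 1.0) for _ in range(30)]
chain = [[j for j in (i - 1, i + 1) if 0 <= j < 30] for i in range(30)]
smoothed = gibbs_smooth(noisy, chain)
```

Lowering `tau2` strengthens the spatial coupling and produces smoother fields; raising it lets each estimate follow its own measurement.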
Abstract:
Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all of these patterns in a binary matrix efficiently, that is, in time polynomial in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore, we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns, finding the minimum distance to a matrix with the perfect pattern is an NP-complete problem, which means that a polynomial-time algorithm is unlikely to exist. To find patterns in noisy datasets, we need methods that are noise-tolerant and run in practical time on large datasets. The theory of binary matrices gives rise to robust heuristics that perform well on synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, a division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or merely due to random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional compared to datasets generated under a given null hypothesis. After a significant pattern has been detected in a dataset, it is up to domain experts to interpret the results in terms of the application.
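The significance-testing step can be sketched as follows, using an illustrative swap-randomisation null model and a hypothetical "rows with consecutive ones" statistic (the thesis's actual statistics and null models are not reproduced here): null datasets share the observed matrix's row and column sums, and the empirical p-value is the fraction of null datasets scoring at least as high as the observed one.

```python
import random

def swap_randomize(matrix, attempts=5000, rng=None):
    """Return a binary matrix with the same row and column sums, obtained by
    repeatedly flipping 2x2 'checkerboard' submatrices."""
    rng = rng or random.Random(0)
    M = [row[:] for row in matrix]
    n_rows, n_cols = len(M), len(M[0])
    for _ in range(attempts):
        i1, i2 = rng.randrange(n_rows), rng.randrange(n_rows)
        j1, j2 = rng.randrange(n_cols), rng.randrange(n_cols)
        if (i1 != i2 and j1 != j2 and
                M[i1][j1] == M[i2][j2] and M[i1][j2] == M[i2][j1] and
                M[i1][j1] != M[i1][j2]):
            for i, j in ((i1, j1), (i1, j2), (i2, j1), (i2, j2)):
                M[i][j] ^= 1          # flip the checkerboard; margins unchanged
    return M

def consecutive_one_rows(matrix):
    """Count rows whose 1s form a single consecutive run (a C1P-style statistic)."""
    count = 0
    for row in matrix:
        ones = [j for j, v in enumerate(row) if v]
        if ones and ones[-1] - ones[0] + 1 == len(ones):
            count += 1
    return count

# A banded 8x10 matrix: every row's ones are consecutive.
data = [[1 if i <= j <= i + 2 else 0 for j in range(10)] for i in range(8)]
observed = consecutive_one_rows(data)

rng = random.Random(1)
null_stats = [consecutive_one_rows(swap_randomize(data, rng=rng)) for _ in range(99)]
p_value = (1 + sum(s >= observed for s in null_stats)) / 100.0
```

A small `p_value` indicates that the banded structure is unlikely to arise in random matrices with the same margins, so the pattern is deemed significant.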
Abstract:
Gene mapping is a systematic search for genes that affect observable characteristics of an organism. In this thesis we offer computational tools to improve the efficiency of (disease) gene-mapping efforts. In the first part of the thesis we propose an efficient simulation procedure for generating realistic genetic data from isolated populations. Simulated data is useful for evaluating hypothesised gene-mapping study designs and computational analysis tools. As an example of such evaluation, we demonstrate how a population-based study design can be a powerful alternative to traditional family-based designs in association-based gene-mapping projects. In the second part of the thesis we consider the prioritisation of a (typically large) set of putative disease-associated genes acquired from an initial gene-mapping analysis. Prioritisation is necessary to be able to focus on the most promising candidates. We show how to harness current biomedical knowledge for the prioritisation task by integrating various publicly available biological databases into a weighted biological graph. We then demonstrate how to find and evaluate connections between entities, such as genes and diseases, in this unified schema using graph mining techniques. Finally, in the last part of the thesis, we define the concept of a reliable subgraph and the corresponding subgraph extraction problem. Reliable subgraphs concisely describe strong and independent connections between two given vertices in a random graph, and hence they are especially useful for visualising such connections. We propose novel algorithms for extracting reliable subgraphs from large random graphs. The efficiency and scalability of the proposed graph mining methods are backed by extensive experiments on real data. While our application focus is in genetics, the concepts and algorithms can be applied to other domains as well.
We demonstrate this generality by considering coauthor graphs in addition to biological graphs in the experiments.
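To illustrate the underlying notion with a generic sketch (a Monte Carlo two-terminal reliability estimate, not the thesis's extraction algorithms): in a random graph whose edges exist independently with given probabilities, the quality of a subgraph can be judged by the probability that the two query vertices remain connected within it. The example graph and probabilities below are assumptions.

```python
import random
from collections import defaultdict

def connection_prob(edges, s, t, n_samples=2000, seed=0):
    """Monte Carlo estimate of P(s is connected to t) in a random graph whose
    edges (u, v, p) are present independently with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        adj = defaultdict(list)
        for u, v, p in edges:
            if rng.random() < p:          # sample one realisation of the graph
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:                      # DFS over the sampled realisation
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        hits += t in seen
    return hits / n_samples

# Two independent two-edge paths from 's' to 't'; each edge is present w.p. 0.9.
graph = [("s", "a", 0.9), ("a", "t", 0.9), ("s", "b", 0.9), ("b", "t", 0.9)]
full = connection_prob(graph, "s", "t")
one_path = connection_prob(graph[:2], "s", "t")   # subgraph keeping one path
```

The exact values are 1 − (1 − 0.81)² ≈ 0.964 for the full graph and 0.81 for the single-path subgraph; a reliable subgraph keeps connection probability close to that of the full graph while being much smaller.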
Abstract:
Single molecule force clamp experiments are widely used to investigate how enzymes, molecular motors, and other molecular mechanisms work. We developed a dual-trap optical tweezers instrument with real-time (200 kHz update rate) force clamp control that can exert forces of 0–100 pN on trapped beads. A model for force clamp experiments in the dumbbell geometry is presented. We observe good agreement between the predicted and observed power spectra of bead position and force fluctuations. The model can be used to predict and optimize the dynamics of real-time force clamp optical tweezers instruments. The results of a proof-of-principle experiment, in which lambda exonuclease converts a double-stranded DNA tether held at constant tension into its single-stranded form, show that the developed instrument is suitable for single molecule biology experiments.
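For a single optically trapped bead in the overdamped regime, the position power spectrum takes the standard Lorentzian form S(f) = k_BT / (π²γ(f_c² + f²)) with corner frequency f_c = κ/(2πγ). The sketch below evaluates this for assumed, typical parameter values; the paper's dumbbell geometry couples two such traps through the tether and is more involved.

```python
import math

k_B_T = 4.11e-21    # thermal energy at room temperature, J      (assumed)
kappa = 0.2e-3      # trap stiffness, N/m (i.e. 0.2 pN/nm)       (assumed)
radius = 0.5e-6     # bead radius, m                              (assumed)
eta = 1.0e-3        # viscosity of water, Pa*s

gamma = 6 * math.pi * eta * radius       # Stokes drag coefficient, N*s/m
f_c = kappa / (2 * math.pi * gamma)      # Lorentzian corner frequency, Hz

def power_spectrum(f):
    """One-sided power spectrum of bead position fluctuations, m^2/Hz."""
    return k_B_T / (math.pi ** 2 * gamma * (f_c ** 2 + f ** 2))
```

Integrating S(f) over all frequencies recovers the equipartition variance k_BT/κ, a standard consistency check when calibrating such instruments; at f = f_c the spectrum has fallen to half its zero-frequency value.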