46 results for Flat industrial modeling
em Helda - Digital Repository of University of Helsinki
Abstract:
Summary: Model-based assessment of the risks to aquatic organisms arising from the industrial handling of chemicals.
Abstract:
Light scattering, or the scattering and absorption of electromagnetic waves, is an important tool in all remote-sensing observations. In astronomy, the light scattered or absorbed by a distant object can be the only source of information. In Solar-system studies, light-scattering methods are employed when interpreting observations of atmosphereless bodies such as asteroids, of planetary atmospheres, and of cometary or interplanetary dust. Our Earth is constantly monitored by artificial satellites at different wavelengths. In remote sensing of the Earth, light-scattering methods are not the only source of information: there is always the possibility of making in situ measurements. Satellite-based remote sensing is, however, superior in terms of speed and coverage, provided that the scattered signal can be reliably interpreted. The optical properties of many industrial products play a key role in their quality. Especially for products such as paint and paper, the ability to obscure the background and to reflect light is of utmost importance. High-grade papers are evaluated based on their brightness, opacity, color, and gloss. In product development, there is a need for computer-based simulation methods that could predict the optical properties and could therefore be used to optimize quality while reducing material costs. With paper, for instance, pilot experiments with an actual paper machine can be very time- and resource-consuming. The light-scattering methods presented in this thesis rigorously solve the interaction of light with materials having wavelength-scale structures. These methods are computationally demanding, so their speed and accuracy play a key role. Different implementations of the discrete-dipole approximation are compared in the thesis, and the results provide practical guidelines for choosing a suitable code.
In addition, a novel method is presented for the numerical computation of the orientation-averaged light-scattering properties of a particle, and the method is compared against existing techniques. Simulation of light scattering for various targets, and the possible problems arising from the finite size of the model target, are discussed in the thesis. Scattering by single particles and small clusters is considered, as well as scattering in particulate media and in continuous media with porosity or surface roughness. Various techniques for modeling the scattering media are presented, and the results are applied to optimizing the structure of paper. The same methods can, however, be applied in light-scattering studies of Solar-system regoliths or cometary dust, or in any remote-sensing problem involving light scattering in random media with wavelength-scale structures.
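The orientation averaging mentioned above can be illustrated with a minimal numerical sketch: a scattering quantity is averaged over uniformly sampled particle orientations. This is a generic Monte Carlo illustration, not the thesis's method; `scattered_intensity` is a hypothetical stand-in for a single-orientation solver such as a discrete-dipole-approximation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def scattered_intensity(theta, phi, beta):
    """Hypothetical stand-in for a single-orientation light-scattering
    solver (e.g. one call to a discrete-dipole-approximation code)."""
    # A smooth toy function of the orientation angles only.
    return 1.0 + 0.3 * np.cos(theta) ** 2 + 0.1 * np.sin(2 * phi) * np.cos(beta)

def orientation_average(n_samples=10_000):
    """Monte Carlo average over uniformly distributed orientations.

    Uniformity over orientations requires cos(theta) uniform in [-1, 1],
    while the other two Euler angles are uniform in [0, 2*pi)."""
    cos_theta = rng.uniform(-1.0, 1.0, n_samples)
    theta = np.arccos(cos_theta)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    beta = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    return scattered_intensity(theta, phi, beta).mean()

avg = orientation_average()
```

In practice each sample would be one expensive solver run, which is why the efficiency of orientation-averaging schemes matters.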
Abstract:
Not available
Abstract:
This dissertation examines the short- and long-run impacts of timber prices and other factors on nonindustrial private forest (NIPF) owners' timber harvesting and timber stocking decisions. The utility-based Faustmann model provides testable hypotheses for the exogenous variables retained in the timber supply analysis. The timber stock function, derived from a two-period biomass harvesting model, is estimated using a two-step GMM estimator based on balanced panel data from 1983 to 1991. Timber supply functions are estimated using a Tobit model adjusted for heteroscedasticity and nonnormality of errors, based on panel data from 1994 to 1998. The results show that if the specification analysis of the Tobit model is ignored, inconsistency and bias can have a marked effect on the parameter estimates. The empirical results show that the owner's age is the single most important factor determining timber stock, while timber price is the single most important factor in the harvesting decision. The results of the timber supply estimations can be interpreted using the utility-based Faustmann model of a forest owner who values growing timber in situ.
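The censoring structure that makes the Tobit model suitable for harvesting decisions (many owners harvest nothing in a given period) can be sketched with its log-likelihood. This is a generic homoscedastic Tobit with normal errors; the study's adjustments for heteroscedasticity and nonnormality are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def tobit_loglik(params, y, X):
    """Log-likelihood of a left-censored-at-zero Tobit model.

    y_i = max(0, X_i @ beta + e_i), with e_i ~ N(0, sigma^2).
    Censored observations (y == 0) contribute P(latent supply <= 0);
    uncensored ones contribute the normal density of the residual.
    """
    beta, sigma = params[:-1], params[-1]
    xb = X @ beta
    censored = y <= 0
    ll = np.where(
        censored,
        norm.logcdf(-xb / sigma),             # probability of zero harvest
        norm.logpdf(y, loc=xb, scale=sigma),  # density of observed harvest
    )
    return ll.sum()
```

Ignoring the censored observations, or treating the zeros as ordinary data, is exactly the kind of specification error that produces the inconsistency and bias noted in the abstract.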
Abstract:
In Finland, suckler cow production is carried out in circumstances characterized by a long winter period and a short grazing period. The traditional winter housing system for suckler cows has been insulated or uninsulated buildings, but there is a demand for developing less expensive housing systems. In addition, more information is needed on new winter feeding strategies carried out in inexpensive winter facilities with conventional (hay, grass silage, straw) or alternative (treated straw, industrial by-products, whole-crop silage) feeds. The new feeding techniques should not have any detrimental effects on animal welfare in order to be acceptable to both farmers and consumers. Furthermore, no official feeding recommendations for suckler cows are available in Finland and, thus, recommendations for dairy cows have been used. However, this may lead to over- or underfeeding of suckler cows and, finally, to decreased economic output. In Experiment I, second-calf beef-dairy suckler cows were used to compare the effects of diets based on hay (H) or urea-treated straw (US) at two feeding levels (Moderate, M, vs. Low, L) on the performance of cows and calves. Live weight (LW) gain during indoor feeding was lower for cows on level L than on level M. Cows on diet US lost more LW indoors than those on diet H, but the cows replenished the LW losses on good pasture. Calf LW gain and cow milk production were unaffected by the treatments. The conception rate was unaffected by the treatments but was only 69%. Urea-treated straw proved to be a suitable winter feed for spring-calving suckler cows. Experiment II studied the effects of feeding accuracy on the performance of first- and second-calf beef-dairy cows and calves. In II-1, the day-to-day variation in the roughage offered ranged up to ±40%; in II-2, the same variation was applied over two-week periods. Variation in the roughages offered had minor effects on cow performance, and reproduction was unaffected by the feeding accuracy.
Accurate feeding is not necessary for young beef-dairy crosses if the total amount of energy offered over a period of a few weeks fulfills the energy requirements. The effects of feeding strategies with alternative feeds on the performance of mature beef-dairy and beef cows and calves were evaluated in Experiment III. The two studies consisted of two feeding strategies (Step-up vs. Flat-rate) and two diets (Control vs. Alternative). There were no differences between treatments in cow LW, body condition score (BCS), calf pre-weaning LW gain or cow reproduction. A flat-rate strategy can be practised in the nutrition of mature suckler cows. An oat hull-based flour-mill by-product can partly replace grass silage and straw in the winter diet. Whole-crop barley silage can be offered as a sole feed to suckler cows. Experiment IV evaluated the effects of replacing grass silage with whole-crop barley or oat silage on mature beef cow and calf performance during the winter feeding period. Both whole-crop silages were suitable winter feeds for suckler cows in cold outdoor winter conditions. Experiment V assessed the effects of daily feeding vs. feeding every third day on the performance of mature beef cows and calves. No differences between the treatments were observed in cow LW, BCS, milk production or calf LW. The serum concentrations of urea and long-chain fatty acids were increased on the third day after feeding in the cows fed every third day. Despite this, feeding every third day is an acceptable feeding strategy for mature suckler cows. Experiment VI studied the effects of feeding levels and long-term cold climatic conditions on mature beef cows and calves. The cows were overwintered in outdoor facilities or in an uninsulated indoor facility. Whole-crop barley silage was offered either ad libitum or restricted. All the facilities offered adequate shelter for the cows. The restricted offering of whole-crop barley silage provided enough energy for the cows.
The Finnish energy recommendations for dairy cows were too high for mature beef breed suckler cows in good body condition at housing, even in cold conditions. Therefore, there is a need to determine feeding recommendations for suckler cows in Finland. The results showed that the required amount of energy can be offered to the cows using conventional or alternative feeds provided at a lower feeding level, with an inaccurate feeding, flat-rate feeding or feeding-every-third-day strategy. The cows must have an opportunity to replenish the LW and BCS losses at pasture before the next winter. Production in cold conditions can be practised in inexpensive facilities when shelter against rain and wind, a dry resting place, adequate amounts of feed suitable for cold conditions and water are provided for the animals, as was done in the present study.
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions of standard statistical models would be too rigid, a more flexible model formulation, combined with fully probabilistic inference, can be achieved using a hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood can be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. The assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that the same reasoning can also be applied under sampling from a finite population. The main emphasis is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
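As a simple illustration of monotonic regression, the classical pool-adjacent-violators algorithm (PAVA) fits a non-decreasing step function by least squares. This is only a frequentist, single-covariate analogue of the Bayesian non-parametric monotonic model formulated in the thesis, included to show what a monotonicity constraint does to a fit.

```python
def isotonic_fit(y):
    """Pool-adjacent-violators algorithm (PAVA): least-squares fit of a
    non-decreasing sequence to y (inputs assumed already ordered)."""
    # Each block stores [sum, count]; adjacent violating blocks are merged.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge while the previous block's mean exceeds the last block's mean
        # (compared via cross-multiplication to avoid division).
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    fit = []
    for total, count in blocks:
        fit.extend([total / count] * count)  # each block fits its own mean
    return fit
```

Within each merged block the fitted value is the block mean, so local violations of monotonicity are averaged out while the overall trend is preserved.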
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models, such as GARCH, ACD and CARR models, and are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis, asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables taking values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods are proposed for all these cases. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distributions. In particular, two special cases of the GARCH-GH model that describe the data most accurately are proposed, and they are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients.
The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are demonstrated in the empirical application, both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis, a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
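The multiplicative-error structure shared by CARR-type models can be sketched by simulation: the conditional mean range follows a GARCH-like recursion, and the observed range is that mean scaled by a unit-mean positive innovation. Unit-mean exponential innovations are used below purely for simplicity; the inverse-gamma and mixture variants of the thesis are not reproduced, and the parameter values are illustrative.

```python
import numpy as np

def simulate_carr(n, omega=0.05, alpha=0.2, beta=0.7, seed=0):
    """Simulate a CARR(1,1)-type multiplicative error model:

        R_t = lambda_t * eps_t,   E[eps_t] = 1,  eps_t > 0,
        lambda_t = omega + alpha * R_{t-1} + beta * lambda_{t-1}.

    Returns the simulated ranges and their conditional means.
    """
    rng = np.random.default_rng(seed)
    lam = np.empty(n)
    r = np.empty(n)
    lam[0] = omega / (1.0 - alpha - beta)  # unconditional mean range
    r[0] = lam[0] * rng.exponential(1.0)
    for t in range(1, n):
        lam[t] = omega + alpha * r[t - 1] + beta * lam[t - 1]
        r[t] = lam[t] * rng.exponential(1.0)
    return r, lam
```

Because the innovation has unit mean, `lambda_t` is directly the conditional expectation of the range, which is what CARR-type specifications model and forecast.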
Abstract:
In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study the efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images, in which classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability; earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with, which allows the essential to be separated from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented.
For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs, and we ask whether accuracy-versus-effort trade-offs can be controlled after training. As another example, regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner and then ask whether problem-specific organization is necessary.
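The per-input control of the accuracy-effort trade-off described above is often realized as a cascade: cheap classifiers handle easy inputs and delegate uncertain ones to costlier stages. The sketch below is a generic early-exit scheme with purely illustrative stages and a hypothetical confidence threshold, not the specific framework of the thesis.

```python
def cascade_classify(x, stages, threshold=0.8):
    """Run classifier stages in order of increasing cost, stopping as
    soon as one stage is confident enough and delegating otherwise.

    Each stage maps an input to (label, confidence in [0, 1]).
    Returns (label, number of stages actually evaluated).
    """
    label = None
    for k, stage in enumerate(stages, start=1):
        label, confidence = stage(x)
        if confidence >= threshold:   # confident: exit early, saving effort
            return label, k
    return label, len(stages)         # fell through to the final stage

# Illustrative stages: confidence grows with (hypothetical) model cost.
stages = [
    lambda x: ("cat" if x > 0 else "dog", 0.6 + 0.3 * (abs(x) > 1.0)),
    lambda x: ("cat" if x > 0 else "dog", 0.95),
]
```

Raising or lowering `threshold` after training moves the operating point along the accuracy-effort curve, which is one way trade-offs can be controlled without retraining.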
Abstract:
The aim of this study was to explore the sociocultural value orientations of Finnish adolescents and their attitudes toward the information society, as well as the association between these values and attitudes. I investigated whether values and attitudes follow social development and whether they can be divided into value categories such as traditional, modern and postmodern. This study falls into the category of youth research. It uses a multimethodological approach and straddles the following disciplines: the science of education, religious education, sociology and social psychology. The theoretical context of the study is modernisation, understood as a two-level process. The first level represents the transition from a religion-based traditional society to a modern industrial society. The second level refers to the process of development established after the Second World War, called postmodernisation, which is understood as the transition from an emphasis on economic imperatives to an emphasis on subjective well-being and the quality of life. Postmodernisation influences both social organisations and individuals' values and worldviews. The target group of this survey study comprised 408 16- to 19-year-old Finnish adolescent students from secondary school and vocational school. The data were gathered with a quantitative questionnaire during the second half of 2001. The results of the study can be generalised to the population of Finnish 16- to 19-year-olds. The data were analysed quantitatively using ANOVA and multivariate analyses such as cluster analysis, factor analysis and general linear modeling. Bayesian dependence modeling served to explore further how values predict attitudes toward the information society. The results indicate that values are associated not only with attitudes toward the information society, but with many other sociocultural indicators as well.
Particularly strong explanatory indicators included gender and identity or lifestyle questions. The results also indicate an association between values, attitudes and social development, consistent with a two-level modernisation process. Values formed traditional, modern and postmodern value systems.
Keywords: values, attitudes, modernisation, information society, traditional, modern, postmodern