13 results for Dynamic nonlinear
in Helda - Digital Repository of University of Helsinki
Abstract:
Costs of purchasing new piglets and of feeding them until slaughter are the main variable expenditures in pig fattening. They both depend on slaughter intensity, the nature of feeding patterns and the technological constraints of pig fattening, such as genotype. Therefore, it is of interest to examine the effect of production technology and changes in input and output prices on feeding and slaughter decisions. This study examines the problem by using a dynamic programming model that links the genetic characteristics of a pig to feeding decisions and the timing of slaughter and takes into account how these jointly affect the quality-adjusted value of a carcass. The model simulates the growth mechanism of a pig under alternative feeding and slaughter patterns and then solves the optimal feeding and slaughter decisions recursively. The state of nature and the genotype of a pig are known in the analysis. The main contribution of this study is the dynamic approach that explicitly takes carcass quality into account while simultaneously optimising feeding and slaughter decisions. The method maximises the internal rate of return to the capacity unit. Hence, the results can have a vital impact on the competitiveness of pig production, which is known to be quite capital-intensive. The results suggest that the producer can benefit significantly from improvements in the pig's genotype, because they improve the efficiency of pig production. The annual benefits from obtaining pigs of improved genotype can be more than €20 per capacity unit. The annual net benefits of animal breeding to pig farms can also be considerable. Animals of improved genotype can reach optimal slaughter maturity more quickly and produce leaner meat than animals of poor genotype. In order to fully utilise the benefits of animal breeding, the producer must adjust feeding and slaughter patterns on the basis of genotype. The results suggest that the producer can benefit from flexible feeding technology. Flexible feeding technology segregates pigs into groups according to their weight, carcass leanness, genotype and sex, and thereafter optimises feeding and slaughter decisions separately for these groups. Typically, such a technology provides incentives to feed piglets with protein-rich feed so that the genetic potential to produce leaner meat is fully utilised. When the pig approaches slaughter maturity, the share of protein-rich feed in the diet gradually decreases and the amount of energy-rich feed increases. Generally, the optimal slaughter weight is within the weight range that pays the highest price per kilogram of pig meat. The optimal feeding pattern and the optimal timing of slaughter depend on price ratios. In particular, an increase in the price of pig meat provides incentives to increase growth rates up to the pig's biological maximum by increasing the amount of energy in the feed. Price changes and changes in the slaughter premium can also have large income effects. Key words: barley, carcass composition, dynamic programming, feeding, genotypes, lean, pig fattening, precision agriculture, productivity, slaughter weight, soybeans
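As a hedged illustration of the recursive optimisation described above, the sketch below solves a toy feeding-and-slaughter problem by backward induction. The weight grid, growth response, prices and costs are invented placeholders, not the functions estimated in the thesis.

```python
# Minimal backward-recursion sketch of a feeding/slaughter dynamic program.
# State: live weight (kg); decision each week: slaughter now, or feed one of
# several diets. Growth and price functions are hypothetical placeholders.

import numpy as np

WEEKS = 20                      # planning horizon after a piglet arrives
WEIGHTS = np.arange(25, 131)    # discrete weight grid, kg
DIETS = [0.0, 0.5, 1.0]         # share of protein-rich feed in the mix
BETA = 0.999                    # weekly discount factor

def carcass_value(w):
    """Hypothetical quality-adjusted carcass revenue (EUR) as a function of weight."""
    price_per_kg = 1.5 if 76 <= w <= 92 else 1.3   # assumed premium weight range
    return 0.58 * w * price_per_kg                  # assumed 58 % carcass yield

def weekly_gain(w, protein_share):
    """Hypothetical growth response: protein-rich feed helps more at low weights."""
    return 5.0 + 2.0 * protein_share * (1.0 - w / 130.0)

def feed_cost(protein_share):
    return 6.0 + 2.5 * protein_share                # EUR per week, placeholder

# Terminal condition: at the end of the horizon the pig is slaughtered.
V = np.array([carcass_value(w) for w in WEIGHTS])

for t in reversed(range(WEEKS)):
    V_next = V.copy()
    for i, w in enumerate(WEIGHTS):
        slaughter_now = carcass_value(w)
        keep_feeding = max(
            -feed_cost(d) + BETA * np.interp(w + weekly_gain(w, d), WEIGHTS, V_next)
            for d in DIETS
        )
        V[i] = max(slaughter_now, keep_feeding)

print("Value of a 25 kg piglet place:", round(V[0], 2), "EUR")
```

At each weekly stage the value of slaughtering now is compared with the best continuation value over the candidate diets, which is the trade-off the abstract describes; the actual model additionally tracks carcass leanness and maximises the internal rate of return per capacity unit.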
Abstract:
The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which the suitable computations are learned from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and the constraints and objectives specified for the learning process. This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the scope of the introduction, we briefly overview the primary challenges to visual processing, as well as recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide the first reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e. the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selectable from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
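The general learning setting described above (receptive fields emerging from independence maximization on natural image data) is commonly implemented with independent component analysis. The snippet below is a minimal sketch using scikit-learn's FastICA; the random data merely stands in for whitened natural-image patches, and the sketch does not reproduce the nonlinear contrast-domain analysis of the publications.

```python
# Sketch: learning linear features from image patches by maximizing independence.
# Real experiments would use patches sampled from natural images; random data
# stands in here so that the snippet runs on its own.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
patches = rng.standard_normal((10000, 16 * 16))     # placeholder for 16x16 patches
patches -= patches.mean(axis=1, keepdims=True)      # remove mean luminance per patch

ica = FastICA(n_components=64, whiten="unit-variance", random_state=0, max_iter=500)
ica.fit(patches)

# Each row of components_ is a learned filter; on real natural-image data these
# tend to resemble localized, oriented, simple-cell-like receptive fields.
filters = ica.components_.reshape(64, 16, 16)
print(filters.shape)
```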
Abstract:
Protein conformations and dynamics can be studied by nuclear magnetic resonance spectroscopy using dilute liquid crystalline samples. This work clarifies the interpretation of residual dipolar coupling data yielded by the experiments. It was discovered that unfolded proteins without any additional structure beyond that of a mere polypeptide chain exhibit residual dipolar couplings. It was also found that molecular dynamics induce fluctuations in the molecular alignment and in doing so affect residual dipolar couplings. This finding clarified the origins of the low order parameter values observed earlier. The work required the development of new analytical and computational methods for the prediction of intrinsic residual dipolar coupling profiles for unfolded proteins. The presented characteristic chain model is able to reproduce the general trend of experimental residual dipolar couplings for denatured proteins. The details of experimental residual dipolar coupling profiles are beyond the analytical model, but improvements are proposed to achieve greater accuracy. A computational method for rapid prediction of unfolded protein residual dipolar couplings was also developed. Protein dynamics were shown to modulate the effective molecular alignment in a dilute liquid crystalline medium. The effects were investigated using experimental and molecular dynamics generated conformational ensembles of folded proteins. It was noted that dynamics-induced alignment is significant especially for the interpretation of molecular dynamics in small, globular proteins. A method of correction was presented. Residual dipolar couplings offer an attractive possibility for the direct observation of protein conformational preferences and dynamics. The presented models and methods of analysis provide significant advances in the interpretation of residual dipolar coupling data from proteins.
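For orientation, the residual dipolar coupling between nuclei i and j is conventionally written in terms of the axial and rhombic components of the molecular alignment tensor; this is the standard textbook form, not a formula quoted from the thesis:

```latex
D_{ij}(\theta,\phi) \;=\; D_{\max}\!\left[ A_a\,\frac{3\cos^2\theta - 1}{2}
  \;+\; \frac{3}{4}\,A_r \sin^2\theta \cos 2\phi \right],
\qquad
D_{\max} \;=\; -\frac{\mu_0\,\gamma_i \gamma_j \hbar}{4\pi^2 r_{ij}^3},
```

where (θ, φ) give the orientation of the internuclear vector in the alignment frame and r_ij is the internuclear distance. Conformational averaging and dynamics enter through the averaging of the angular term and, as discussed above, through fluctuations in the alignment itself.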
Abstract:
This thesis concerns the dynamics of nanoparticle impacts on solid surfaces. These impacts occur, for instance, in space, where micro- and nanometeoroids hit the surfaces of planets, moons, and spacecraft. On Earth, materials are bombarded with nanoparticles in cluster ion beam devices, in order to clean or smooth their surfaces, or to analyse their elemental composition. In both cases, the result depends on the combined effects of countless single impacts. However, the dynamics of single impacts must be understood before the overall effects of nanoparticle radiation can be modelled. In addition to applications, nanoparticle impacts are also important to basic research in the nanoscience field, because the impacts provide an excellent case for testing the applicability of atomic-level interaction models under very dynamic conditions. In this thesis, the stopping of nanoparticles in matter is explored using classical molecular dynamics computer simulations. The materials investigated are gold, silicon, and silica. Impacts on silicon through a native oxide layer and the formation of complex craters are also simulated. Nanoparticles up to a diameter of 20 nm (315000 atoms) were used as projectiles. The molecular dynamics method and interatomic potentials for silicon and gold are examined in this thesis. It is shown that the displacement cascade expansion mechanism and crater crown formation are very sensitive to the choice of atomic interaction model. However, the best of the current interatomic models can be utilized in nanoparticle impact simulation, if caution is exercised. The stopping of monatomic ions in matter is understood very well nowadays. However, interactions become very complex when several atoms impact on a surface simultaneously and within a short distance, as happens in a nanoparticle impact. A high energy density is deposited in a relatively small volume, which induces ejection of material and formation of a crater. Very high yields of excavated material are observed experimentally. In addition, the yields scale nonlinearly with the cluster size and impact energy at small cluster sizes, whereas in macroscopic hypervelocity impacts the scaling is linear. The aim of this thesis is to explore the atomistic mechanisms behind the nonlinear scaling at small cluster sizes. It is shown here that the nonlinear scaling of the ejected material yield disappears at large impactor sizes because the stopping mechanism of nanoparticles gradually changes to the same mechanism as in macroscopic hypervelocity impacts. The high yields at small impactor sizes are due to the early escape of energetic atoms from the hot region. In addition, the sputtering yield is shown to depend strongly on the initial spatial energy and momentum distributions that the nanoparticle induces in the material in the first phase of the impact. In the later phases, the ejection of material occurs by several mechanisms. The most important mechanism at high energies or at large cluster sizes is atomic cluster ejection from the transient liquid crown that surrounds the crater. The cluster impact dynamics detected in the simulations are in agreement with several recent experimental results. In addition, it is shown that relatively weak impacts can induce modifications on the surface of an amorphous target over a larger area than was previously expected. This is a probable explanation for the formation of the complex crater shapes observed on these surfaces with atomic force microscopy. Clusters that consist of hundreds of thousands of atoms induce long-range modifications in crystalline gold.
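The classical molecular dynamics method referred to above integrates Newton's equations of motion for all atoms. A minimal velocity-Verlet step might look like the sketch below; a generic Lennard-Jones pair force and a tiny toy geometry stand in for the many-body silicon and gold potentials and the large systems actually simulated.

```python
# Minimal velocity-Verlet integrator, the core of a classical MD simulation.
# A pairwise Lennard-Jones force (reduced units) stands in for the many-body
# potentials needed for realistic impact simulations.

import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces, no cutoff, O(N^2) for clarity."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            inv6 = (sigma ** 2 / r2) ** 3
            fmag = 24.0 * eps * (2.0 * inv6 ** 2 - inv6) / r2
            f[i] += fmag * rij
            f[j] -= fmag * rij
    return f

def velocity_verlet(pos, vel, mass, dt, steps):
    forces = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * forces / mass
        pos += dt * vel
        forces = lj_forces(pos)
        vel += 0.5 * dt * forces / mass
    return pos, vel

# Toy example: a 3-atom "cluster" approaching a 2-atom "surface".
pos = np.array([[0.0, 0, 5], [1.1, 0, 5], [0.55, 1, 5], [0.0, 0, 0], [1.1, 0, 0]])
vel = np.zeros_like(pos)
vel[:3, 2] = -0.5                       # cluster moving toward the surface
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.005, steps=2000)
print(pos.round(2))
```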
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take into account the uncertainty caused by parameter estimation. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
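For reference, the quantile residual of observation t in its standard construction (stated here for the reader; the thesis notation may differ) is

```latex
r_{t,\hat{\theta}} \;=\; \Phi^{-1}\!\bigl( F(y_t \mid \mathcal{F}_{t-1};\, \hat{\theta}\,) \bigr),
```

where F(· | F_{t-1}; θ̂) is the conditional distribution function implied by the estimated model, F_{t-1} is the information set at time t-1, and Φ^{-1} is the standard normal quantile function. Under a correctly specified model with consistently estimated parameters these residuals are approximately independent and standard normal, which is the property the tests above exploit.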
Abstract:
This paper examines how volatility in financial markets can best be modeled. It investigates how well linear and nonlinear volatility models absorb skewness and kurtosis. The examination is done on the Nordic stock markets of Finland, Sweden, Norway and Denmark. Different linear and nonlinear models are applied, and the results indicate that a linear model can almost always be used for modeling the series under investigation, even though nonlinear models perform slightly better in some cases. These results indicate that the markets under study are exposed to asymmetric patterns only to a certain degree. Negative shocks generally have a more prominent effect on the markets, but these effects are not particularly strong. However, in terms of absorbing skewness and kurtosis, nonlinear models outperform linear ones.
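The abstract does not name the specific models; a typical linear/nonlinear pairing in this literature would be a GARCH(1,1) against an asymmetric specification such as EGARCH(1,1), shown here purely as a hedged illustration:

```latex
\text{GARCH(1,1):}\quad \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,
\qquad
\text{EGARCH(1,1):}\quad \ln\sigma_t^2 = \omega + \beta\,\ln\sigma_{t-1}^2
  + \alpha\bigl(\lvert z_{t-1}\rvert - \mathrm{E}\lvert z_{t-1}\rvert\bigr) + \gamma\, z_{t-1},
```

with z_t = ε_t / σ_t; the γ term lets negative shocks raise volatility more than positive shocks of equal size, which is the asymmetry discussed above.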
Abstract:
In order to bring insight into the emerging concept of relationship communication, concepts from two research traditions are combined in this paper. Based on those concepts a new model, the dynamic relationship communication model, is presented. Instead of a company perspective focusing on the integration of outgoing messages such as advertising, public relations and sales activities, it is suggested that the focus should be on factors integrated by the receiver. Such factors can be historical, future, external and internal. Thus, the model puts a strong focus on the receiver in the communication process. The dynamic communication model is illustrated empirically by applying it as a tool to 78 short stories about communication. The empirical findings show that relationship communication occurs in some cases and not in others. The model is a useful tool for displaying relationship communication and how it differs from other communication. The importance of the time dimension, historical and future factors, in relationship communication is discussed. The possibility of reducing communication costs through the notion of relationship communication is discussed in the managerial implications.
Abstract:
The aim of this dissertation is to model economic variables with a mixture autoregressive (MAR) model. The MAR model is a generalization of the linear autoregressive (AR) model. The MAR model consists of K linear autoregressive components. At any given point of time one of these autoregressive components is randomly selected to generate a new observation for the time series. The mixture probability can be constant over time or a direct function of some observable variable. Many economic time series contain properties which cannot be described by linear and stationary time series models. A nonlinear autoregressive model such as the MAR model can be a plausible alternative in the case of these time series. In this dissertation the MAR model is used to model stock market bubbles and the relationship between inflation and the interest rate. In the case of the inflation rate we arrive at a MAR model in which the inflation process is less mean-reverting in the case of high inflation than in the case of normal inflation. The interest rate moves one-for-one with expected inflation. We use data from the Livingston survey as a proxy for inflation expectations. We have found that survey inflation expectations are not perfectly rational. According to our results, information stickiness plays an important role in expectation formation. We also found that survey participants have a tendency to underestimate inflation. A MAR model is also used to model stock market bubbles and crashes. This model has two regimes: the bubble regime and the error-correction regime. In the error-correction regime the price depends on a fundamental factor, the price-dividend ratio, and in the bubble regime the price is independent of fundamentals. In this model a stock market crash is usually caused by a regime switch from the bubble regime to the error-correction regime. According to our empirical results, bubbles are related to low inflation. Our model also implies that bubbles influence the investment return distribution in both the short and the long run.
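Concretely, a K-component Gaussian MAR(p) model has the conditional density (written here in the common notation, which may differ slightly from the thesis):

```latex
f\!\left(y_t \mid \mathcal{F}_{t-1}\right)
  \;=\; \sum_{k=1}^{K} \alpha_{k,t}\,\frac{1}{\sigma_k}\,
  \phi\!\left(\frac{y_t - \varphi_{k,0} - \varphi_{k,1}\, y_{t-1} - \cdots - \varphi_{k,p}\, y_{t-p}}{\sigma_k}\right),
\qquad \sum_{k=1}^{K} \alpha_{k,t} = 1,
```

where φ is the standard normal density and the mixing weights α_{k,t} are either constants or functions of an observable variable, matching the two cases mentioned above.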
Abstract:
In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures or gather the information contained in the series to create an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (which is the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large series of aggregated data obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed by disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the previously extracted factors. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work leads to multiple research developments. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of the micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
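The two-step procedure described above can be sketched in a few lines: extract principal-component factors from the standardized panel, regress the h-step-ahead target on the factors and its own lag, and compare the mean squared forecast error with a univariate AR benchmark. The data below is synthetic and all names are illustrative only, not the Finnish datasets used in the thesis.

```python
# Sketch of two-step factor forecasting with a relative-MSFE evaluation.
# Synthetic data stands in for the macroeconomic panels used in the thesis.

import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(1)
T, N, r, h = 200, 80, 3, 1                      # sample size, panel width, factors, horizon

# Simulate a factor-driven panel X and a target series y.
F = rng.standard_normal((T, r))
X = F @ rng.standard_normal((r, N)) + 0.5 * rng.standard_normal((T, N))
y = F @ np.array([1.0, -0.5, 0.3]) + 0.3 * rng.standard_normal(T)

# Step 1: estimate factors by principal components of the standardized panel.
Z = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
F_hat = Z @ Vt[:r].T

# Step 2: forecasting equation y_{t+h} = const + factors_t + y_t, estimated by OLS.
def msfe(pred, actual):
    return np.mean((pred - actual) ** 2)

split = 150
Xreg = np.column_stack([np.ones(T), F_hat, y])   # regressors dated t
target = np.roll(y, -h)                          # y_{t+h}

beta, *_ = lstsq(Xreg[:split], target[:split], rcond=None)
factor_pred = Xreg[split:T - h] @ beta

# Benchmark: univariate AR(1) forecast of y_{t+h}.
phi, *_ = lstsq(np.column_stack([np.ones(split), y[:split]]), target[:split], rcond=None)
ar_pred = np.column_stack([np.ones(T - h - split), y[split:T - h]]) @ phi

actual = target[split:T - h]
print("relative MSFE (factor / AR):", round(msfe(factor_pred, actual) / msfe(ar_pred, actual), 3))
```

A relative MSFE below one means the factor model forecasts better than the autoregressive benchmark, which is the evaluation criterion used above.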
Abstract:
In this study we analyze how the ion concentrations in forest soil solution are determined by hydrological and biogeochemical processes. A dynamic model, ACIDIC, was developed, including processes common to dynamic soil acidification models. The model treats up to eight interacting layers and simulates soil hydrology, transpiration, root water and nutrient uptake, cation exchange, dissolution and reactions of Al hydroxides in solution, and the formation of carbonic acid and its dissociation products. It also allows the simultaneous use of preferential and matrix flow paths, enabling throughfall water to enter the deeper soil layers through macropores without first reacting with the upper layers. Three different ways of routing the throughfall water through the soil profile via macro- and micropores are presented. The large vertical gradient in the observed total charge was simulated successfully. According to the simulations, the gradient is mostly caused by differences in the intensity of water uptake, sulfate adsorption and organic anion retention at the various depths. The temporal variations in Ca and Mg concentrations were simulated fairly well in all soil layers. For H+, Al and K there was much more variation in the observed than in the simulated concentrations. Flow in macropores is a possible explanation for the apparent disequilibrium of the cation exchange for H+ and K, as the solution H+ and K concentrations have large vertical gradients in the soil. The amount of exchangeable H+ increased in the O and E horizons and decreased in the Bs1 and Bs2 horizons, the net change in the whole soil profile being a decrease. A large part of the decrease of the exchangeable H+ in the illuvial B horizon was caused by sulfate adsorption. The model produces soil water amounts and solution ion concentrations which are comparable to the measured values, and it can be used in both hydrological and chemical studies of soils.
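As one example of the solution chemistry handled by such a model, the formation and dissociation of carbonic acid mentioned above follows the familiar equilibria (standard chemistry, not a quotation from the model description):

```latex
\mathrm{CO_2(aq)} + \mathrm{H_2O} \rightleftharpoons \mathrm{H_2CO_3}
  \rightleftharpoons \mathrm{H^+} + \mathrm{HCO_3^-},
\qquad
\mathrm{HCO_3^-} \rightleftharpoons \mathrm{H^+} + \mathrm{CO_3^{2-}},
```

with the corresponding equilibrium constants K_1 and K_2 linking the dissolved CO_2 concentration, pH, and the bicarbonate and carbonate anion concentrations in the soil solution.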