Abstract:
The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY records grouped into ten monthly classes were analyzed for the additive genetic, permanent environmental, and residual effects (random effects), whereas the contemporary group, calving age (linear and quadratic effects), and mean lactation curve were analyzed as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials ranging from the second to the fifth order. Residual variances were considered in one, four, six, or ten variance classes. The best model had six residual variance classes. The heritability estimates for the TDMY records varied from 0.19 to 0.32. The random regression model that used a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect was adequate by the main criteria employed. The model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order polynomial for the permanent environmental effect could also be employed in these analyses.
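As a rough illustration of how such regression covariates are built, the sketch below evaluates normalized Legendre polynomials on test days standardized to [-1, 1]. The 5-305-day lactation window and the function name are assumptions for illustration, not taken from the study.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(days, order, dmin=5, dmax=305):
    """Evaluate normalized Legendre polynomials 0..order on days in milk
    standardized to [-1, 1] (the 5-305 day window is an assumption)."""
    x = 2.0 * (np.asarray(days, float) - dmin) / (dmax - dmin) - 1.0
    cols = []
    for j in range(order + 1):
        c = np.zeros(j + 1)
        c[j] = 1.0  # coefficient vector selecting the j-th Legendre polynomial
        cols.append(np.sqrt((2 * j + 1) / 2.0) * legendre.legval(x, c))
    return np.column_stack(cols)

# Covariates for three test days under a second-order (quadratic) polynomial
Phi = legendre_covariates([5, 155, 305], order=2)
print(Phi.shape)  # (3, 3)
```

Each row of `Phi` would enter the mixed-model equations as the regression covariates of one test-day record.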
Abstract:
In this paper, we prove that a self-avoiding walk of infinite length provides a structure that would resolve Olbers' paradox. That is, if the stars of a universe were distributed like the vertices of an infinite self-avoiding random walk with segment lengths of about a parsec, then the night sky could be as dark as actually observed on Earth. A self-avoiding random walk structure can therefore resolve Olbers' paradox even in a static universe.
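The convergence argument can be illustrated with a toy calculation: if the distance of the n-th star from the observer grows as n^nu along the walk, the received inverse-square flux series behaves like the sum of n^(-2*nu), which diverges (logarithmically) for the ordinary random-walk exponent nu = 0.5 but converges for the self-avoiding exponent nu ~ 0.588. The function below is a hypothetical sketch of this comparison, not the paper's computation.

```python
import numpy as np

def night_sky_flux(nu, n_stars):
    """Total inverse-square flux at the observer from n_stars unit-luminosity
    stars whose distance grows as r_n ~ n**nu along the walk (toy model)."""
    n = np.arange(1.0, n_stars + 1)
    return float(np.sum(n ** (-2.0 * nu)))

# nu = 0.5 (ordinary random walk): the flux keeps growing with n_stars.
# nu ~ 0.588 (self-avoiding walk): the flux approaches a finite limit.
flux_rw = night_sky_flux(0.5, 10**6)
flux_saw = night_sky_flux(0.588, 10**6)
```

Comparing the flux at 10**4 and 10**6 stars shows the random-walk series still growing while the self-avoiding series has nearly saturated.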
Abstract:
The social and legal problems that have arisen in recent years with financial swaps and preferred shares raise the question of whether an error in contractual consent has occurred with these types of financial products. Based on the Spanish Civil Code and legal doctrine, the essential elements of the contract have been analyzed, as well as the legislation applicable to financial instruments. With the help of case law, it has been possible to verify that, in most cases brought before the courts over these contracts in which contractual annulment is sought, the main ground is the credit institutions' breach of their legal duties. This work makes clear the importance of linking the contractual element of consent with the obligation of credit institutions to inform their clients. The flawed understanding of the contractual reality that clients express through their consent is without doubt tied to the need to obtain all relevant information about the contract. The duty to inform is closely linked to the duty to classify clients; both are legal obligations of the institutions in their duty of business loyalty. Financial institutions must therefore classify their clients and provide them with information, all the more rigorously in the case of retail clients. For all these reasons, we see that in those cases involving retail clients in which the credit institutions could not prove that all the necessary information was provided, an error in consent occurred. The clients did not know the true scope of the commitment or the costs they had assumed; there is no doubt that, in many cases, had they known the reality, they would not have entered into the contract.
Abstract:
Location information is becoming increasingly necessary as every new smartphone incorporates a GPS (Global Positioning System) receiver, which allows the development of various applications based on it. However, it is not possible to properly receive the GPS signal in indoor environments. For this reason, new indoor positioning systems are being developed. As indoor environments are very challenging scenarios, it is necessary to study the precision of the obtained location information in order to determine whether these new positioning techniques are suitable for indoor positioning.
Abstract:
Contrast enhancement is an image processing technique whose objective is to preprocess the image so that relevant information can be either seen or further processed more reliably. These techniques are typically applied when the image itself, or the device used for image reproduction, provides poor visibility and distinguishability of different regions of interest in the image. In most studies, the emphasis is on the visualization of image data, but this human-observer-biased goal often results in images that are not optimal for automated processing. The main contribution of this study is to express contrast enhancement as a mapping from N-channel image data to a 1-channel gray-level image, and to devise a projection method that yields an image with minimal error with respect to the correct contrast image. The projection, the minimum-error contrast image, possesses the optimal contrast between the regions of interest in the image. The method is based on estimating the probability density distributions of the region values, and it employs Bayesian inference to establish the minimum-error projection.
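A minimal sketch of the underlying Bayes minimum-error rule, assuming Gaussian class-conditional densities with hypothetical parameters (the study estimates these densities from the image data; the values below are invented for illustration):

```python
import numpy as np

def minimum_error_map(values, mu, sigma, prior):
    """Assign each value to the class with the highest posterior under
    Gaussian class-conditional densities (Bayes minimum-error rule)."""
    values = np.asarray(values, float)[:, None]
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    prior = np.asarray(prior, float)
    # Unnormalized posteriors: prior * N(value | mu, sigma); the shared
    # normalizing constant does not affect the argmax
    post = prior * np.exp(-0.5 * ((values - mu) / sigma) ** 2) / sigma
    return np.argmax(post, axis=1)

# Two hypothetical region classes: dark (mean 20) and bright (mean 180)
labels = minimum_error_map([10, 120, 200], mu=[20, 180], sigma=[15, 25], prior=[0.5, 0.5])
print(labels)  # [0 1 1]
```

The decision boundary implied by this rule is exactly where the two weighted densities cross, which is what minimizes the expected classification error.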
Abstract:
The marketplace of the twenty-first century will demand that manufacturing assume a crucial role in a new competitive field. Two potential resources in the area of manufacturing are advanced manufacturing technology (AMT) and empowered employees. Surveys in Finland have shown the need to invest in new AMT in the Finnish sheet metal industry in the 1990s. So far, the focus has been on hard technology, and less attention has been paid to the utilization of human resources. In many manufacturing companies, an appreciable portion of the attainable profit is wasted owing to poor quality of planning and workmanship. This thesis inspects the distribution of production errors in the production flow of sheet-metal-part-based constructions. The objective of the thesis is to analyze the origins of production errors in the production flow of sheet-metal-based constructions. Employee empowerment is also investigated in theory, and its role in reducing the overall number of production errors is discussed. This study is most relevant to the sheet metal part fabricating industry, which produces sheet-metal-part-based constructions for the electronics and telecommunications industries. The study concentrates on the manufacturing function of a company and is based on a field study carried out in five Finnish case factories. In each case factory studied, the work phases most prone to production errors were detected. It can be assumed that most production errors arise in manually operated work phases and in mass production work phases. However, no common pattern for the distribution of production errors in the production flow could be found in the collected data. The most important finding was that most of the production errors in each case factory studied belong to the category of human-activity-based errors. This result indicates that most of the problems in the production flow are related to employees or to work organization.
Development activities must therefore be focused on the development of employee skills or of work organization. Employee empowerment provides the right tools and methods to achieve this.
Abstract:
Using numerical simulations, we investigate how the overall dimensions of random knots scale with their length. We demonstrate that when closed non-self-avoiding random trajectories are divided into groups consisting of individual knot types, each such group shows a scaling exponent of approximately 0.588, which is typical for self-avoiding walks. However, when all generated knots are grouped together, their scaling exponent becomes equal to 0.5 (as in non-self-avoiding random walks). Here we explain this apparent paradox. We introduce the notion of the equilibrium length of individual types of knots and show its correlation with the length of ideal geometric representations of knots. We also demonstrate that the overall dimensions of random knots with a given chain length follow the same order as the dimensions of ideal geometric representations of knots.
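The 0.5 exponent for non-self-avoiding walks can be checked with a quick simulation: fitting R ~ N^nu to the RMS end-to-end distance of open 3D random walks recovers nu close to 0.5. This sketch uses open walks rather than the closed knotted trajectories the paper studies, and the sample sizes are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_size(n_steps, n_walks=2000):
    """RMS end-to-end distance of open 3D random walks with unit-length steps."""
    steps = rng.normal(size=(n_walks, n_steps, 3))
    steps /= np.linalg.norm(steps, axis=2, keepdims=True)  # unit segments
    ends = steps.sum(axis=1)
    return np.sqrt((ends ** 2).sum(axis=1).mean())

lengths = np.array([50, 100, 200, 400, 800])
sizes = np.array([rms_size(n) for n in lengths])
# Fit the scaling exponent nu in R ~ N^nu on a log-log scale
nu = np.polyfit(np.log(lengths), np.log(sizes), 1)[0]
```

For non-self-avoiding walks the fitted exponent lands near 0.5; a self-avoiding sampler would instead approach the 0.588 exponent quoted above.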
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
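A compact sketch of the error-model idea, using plain SVD-based principal components on a synthetic learning set in place of a full FPCA implementation; the data-generating relation between proxy and exact curves below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_fpca(curves, n_comp):
    """Center the curves and extract principal-component scores via SVD
    (a discrete stand-in for FPCA on a common time grid)."""
    mean = curves.mean(axis=0)
    _, _, vt = np.linalg.svd(curves - mean, full_matrices=False)
    comps = vt[:n_comp]
    return mean, comps, (curves - mean) @ comps.T

# Synthetic learning set: the "exact" curve is a smooth transform of the proxy
t = np.linspace(0.0, 1.0, 50)
coef = rng.normal(1.0, 0.2, size=(30, 2))
proxy = coef[:, :1] * np.sin(2 * np.pi * t) + coef[:, 1:]
exact = 1.1 * proxy + 0.05 * np.cos(2 * np.pi * t)

mp, vp, sp = fit_fpca(proxy, 2)
me, ve, se = fit_fpca(exact, 2)
# Error model: least-squares map from proxy scores to exact scores
B, *_ = np.linalg.lstsq(sp, se, rcond=None)

def predict_exact(new_proxy):
    """Predict the exact response from the proxy response alone."""
    return me + ((new_proxy - mp) @ vp.T @ B) @ ve

# A new realization only the proxy solver would be run on
new = 1.3 * np.sin(2 * np.pi * t) + 0.9
err = np.abs(predict_exact(new) - (1.1 * new + 0.05 * np.cos(2 * np.pi * t))).max()
```

In real use, `proxy` and `exact` would hold the solver outputs on the learning set of geostatistical realizations, and the regression could be replaced by any machine-learning model on the scores.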
Abstract:
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. In order to obtain information about each possible data division we carried out a conditional Monte Carlo simulation with 100,000 samples for each systematically chosen triplet. Robustness and power are studied under several experimental conditions: different autocorrelation levels and different effect sizes, as well as different phase lengths determined by the points of change. Type I error rates were distorted by the presence of autocorrelation for the majority of data divisions. Satisfactory Type II error rates were obtained only for large treatment effects. The relationship between the lengths of the four phases appeared to be an important factor for the robustness and the power of the randomization test.
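The mechanics of such a randomization test can be sketched as follows, using mean(B) - mean(A) as the test statistic and exhaustively enumerating the admissible change-point triplets; the data series, the statistic, and the minimum phase length are illustrative assumptions, not taken from the study.

```python
import numpy as np
from itertools import combinations

def abab_randomization_test(data, observed_cp, min_len=2):
    """Randomization test for an ABAB design: mean(B) - mean(A) is recomputed
    for every admissible change-point triplet (i, j, k), and the p-value is
    the share of triplets whose statistic reaches the observed one."""
    data = np.asarray(data, float)
    n = len(data)

    def stat(cp):
        i, j, k = cp
        a = np.concatenate([data[:i], data[j:k]])  # the two A phases
        b = np.concatenate([data[i:j], data[k:]])  # the two B phases
        return b.mean() - a.mean()

    triplets = [c for c in combinations(range(min_len, n - min_len + 1), 3)
                if c[1] - c[0] >= min_len and c[2] - c[1] >= min_len]
    stats = np.array([stat(c) for c in triplets])
    return float((stats >= stat(observed_cp)).mean())

# Illustrative series: low A phases, high B phases, change points at 3, 6, 9
data = [1, 2, 1, 5, 6, 5, 1, 2, 1, 6, 5, 6]
p_value = abab_randomization_test(data, observed_cp=(3, 6, 9))
```

With this short series the observed division is the only one that cleanly separates the phases, so the p-value equals one over the number of admissible triplets, illustrating why phase lengths constrain the attainable significance level.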
Abstract:
Many species are able to learn to associate behaviours with rewards, as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals that use information about payoffs associated with non-tried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoffs. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask what the evolutionarily stable learning rule is under pairwise symmetric two-action stochastic repeated games played over the individual's lifetime. We analyse, through stochastic approximation theory and simulations, the learning dynamics on the behavioural timescale, and derive conditions under which trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms can also arise in which trial-and-error learners are maintained at a low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
Abstract:
We study discrete-time models in which death benefits can depend on a stock price index, the logarithm of which is modeled as a random walk. Examples of such benefit payments include put and call options, barrier options, and lookback options. Because the distribution of the curtate-future-lifetime can be approximated by a linear combination of geometric distributions, it suffices to consider curtate-future-lifetimes with a geometric distribution. In binomial and trinomial tree models, closed-form expressions for the expectations of the discounted benefit payment are obtained for a series of options. They are based on results concerning geometric stopping of a random walk, in particular also on a version of the Wiener-Hopf factorization.
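The expectation E[v^K (S_K - strike)^+] with a geometric curtate-future-lifetime K can also be approximated directly by truncating the sum over lifetimes, without the closed-form Wiener-Hopf machinery the paper derives. A toy sketch for a binomial tree with hypothetical parameters:

```python
import numpy as np

def expected_discounted_call(s0, u, d, q, strike, v, p, n_max=400):
    """Approximate E[v**K * max(S_K - strike, 0)] where the curtate lifetime K
    is geometric, P(K = k) = p * (1 - p)**(k - 1), and S follows a binomial
    tree: up factor u with probability q, down factor d otherwise.
    The infinite sum over lifetimes is truncated at n_max steps."""
    total = 0.0
    probs = np.array([1.0])  # distribution of the number of up-moves so far
    for k in range(1, n_max + 1):
        new = np.zeros(k + 1)
        new[1:] += probs * q         # one more up-move
        new[:-1] += probs * (1 - q)  # one more down-move
        probs = new
        j = np.arange(k + 1)
        payoff = np.maximum(s0 * u ** j * d ** (k - j) - strike, 0.0)
        total += p * (1 - p) ** (k - 1) * v ** k * (probs * payoff).sum()
    return total

# Hypothetical parameters: this is a sketch, not calibrated to any market
price = expected_discounted_call(s0=100, u=1.05, d=0.96, q=0.5, strike=100, v=0.95, p=0.1)
```

With strike 0 and a martingale-like tree (q*u + (1-q)*d = 1), the truncated sum matches the geometric-series value p*v*s0 / (1 - (1-p)*v), which is a handy sanity check on the recursion.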
Abstract:
We study the theoretical properties that a matching function must satisfy in order to represent a labor market with frictions within a general equilibrium model with random matching. We analyze the Cobb-Douglas, CES, and other functional forms for the matching function. Our results establish restrictions on the parameters of these functional forms to ensure that the equilibrium is interior. These restrictions provide theoretical grounds for choosing among functional forms and make it possible to design model misspecification tests in empirical work.