965 results for Error in substance
Abstract:
Segment poses and joint kinematics estimated from skin markers are strongly affected by soft tissue artifact (STA) and its rigid motion component (STARM). While four marker-clusters can decrease the non-rigid motion of the STA during gait, other data, such as marker location or STARM patterns, are crucial for compensating for STA in clinical gait analysis. The present study proposed 1) to devise a comprehensive average map of the spatial distribution of STA over the lower limb during treadmill gait and 2) to analyze STARM from four marker-clusters assigned to areas extracted from this spatial distribution. All experiments used a stereophotogrammetric system to track the skin markers and a bi-plane fluoroscopic system to track the knee prosthesis. The spatial distribution of STA was computed for 19 subjects using 80 markers affixed to the lower limb. Three areas were extracted from the distribution map of the thigh. Marker displacement reached maxima of 24.9 mm and 15.3 mm in the proximal areas of the thigh and shank, respectively. STARM was larger on the thigh than on the shank, with RMS errors in cluster orientation between 1.2° and 8.1°. Translation RMS errors were also large (3.0 mm to 16.2 mm). No marker-cluster correctly compensated for STARM. However, the coefficient of multiple correlation showed excellent agreement between skin and bone kinematics, as well as for STARM between subjects. These correlations highlight dependencies between STARM and the kinematic components. This study provides new insights for modeling STARM during gait.
Abstract:
Approximations are part of everyday practice in physical chemistry. In many chemistry textbooks, approximations are justified qualitatively rather than quantitatively. We developed examples that allow the quantitative impact of approximations to be evaluated, taking into account the error tolerated in the approximate calculation. Estimating the error of an approximation should serve as a guide to establishing the validity of calculations that use it. Thus, a shortcut that replaces an exact calculation with an approximate one can be used without loss of quality in the results, while also indicating when the adopted criteria are valid.
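As an illustration of using a tolerated error to delimit an approximation's validity, here is a minimal sketch; the ln(1+x) ≈ x shortcut and the 1 % tolerance are stand-in assumptions, not examples from the text:

```python
import math

def relative_error(x):
    """Relative error of the shortcut ln(1+x) ~ x versus the exact value."""
    exact = math.log1p(x)
    return abs(x - exact) / abs(exact)

def validity_bound(tolerance, xs):
    """Largest x in xs for which the shortcut stays within the tolerated error."""
    return max((x for x in xs if relative_error(x) <= tolerance), default=None)

xs = [k / 1000 for k in range(1, 1001)]   # x from 0.001 to 1.0
bound = validity_bound(0.01, xs)          # 1 % tolerated error
print(f"ln(1+x) ~ x stays within 1% only for x <= {bound:.3f}")
```

The same pattern applies to any shortcut: state the tolerated error first, then let it delimit the region where the approximate calculation may replace the exact one.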
Abstract:
In mathematical modeling, estimation of the model parameters is one of the most common problems: the goal is to find parameters that fit the measurements as well as possible. There is always error in the measurements, which implies uncertainty in the model estimates. In Bayesian statistics, all unknown quantities are represented as probability distributions. If there is knowledge about the parameters beforehand, it can be formulated as a prior distribution. Bayes' rule combines the prior and the measurements into the posterior distribution. Mathematical models are typically nonlinear, and producing statistics for them requires efficient sampling algorithms. This thesis introduces the Metropolis-Hastings (MH) and Adaptive Metropolis (AM) algorithms as well as Gibbs sampling, and presents different ways of specifying prior distributions. The main issue is measurement error estimation and how to obtain prior knowledge of the variance or covariance. Variance and covariance sampling is combined with the algorithms above. Examples of the hyperprior models are applied to the estimation of model parameters and error in an outlier case.
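A minimal sketch of the random-walk Metropolis-Hastings algorithm mentioned above; the 1-D Gaussian target, step size, and sample counts are illustrative assumptions:

```python
import math, random

def metropolis_hastings(log_post, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and accept
    with probability min(1, post(x') / post(x)), evaluated in log space."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)          # current state is recorded either way
    return samples

# Stand-in target: a N(2, 0.5^2) posterior, known only up to a constant
log_post = lambda x: -((x - 2.0) ** 2) / (2.0 * 0.5 ** 2)
chain = metropolis_hastings(log_post, x0=0.0, n_samples=50_000)
burned = chain[5_000:]             # discard burn-in before computing statistics
mean = sum(burned) / len(burned)
```

The AM variant differs mainly in adapting `step` from the chain's own sample covariance instead of keeping it fixed.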
Abstract:
In today's manufacturing industry, robots and automated production stages play a very significant role. With current systems, precisely planned movements and operation stages can be timed accurately relative to one another, so that even when various fault situations occur the system can act as the situation requires. Automation also has the advantage that production can be adapted to manufacture different products with small changes, keeping production costs low even for small batches. In multi-axis devices, so-called multi-axis drives, the operating accuracy of the device depends on the accuracy of each motion axis. Motion control has traditionally used a feed-forward position cascade, whose tuning takes into account the different dynamic states of the axis and the references used. In many current distributed systems, i.e. multi-axis drives in which each axis has its own controller, the position error of an individual axis is not taken into account in the control of the other axes. This thesis studies different control methods for multi-axis systems and seeks to improve the operation of the feed-forward position cascade in multi-axis drives by adding, alongside the position controller, a second controller whose input is the position difference between the axes.
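The proposed scheme, a second controller fed by the inter-axis position difference running alongside the position controller, can be sketched with a toy two-axis simulation; the plant gains, controller gains, and reference below are illustrative assumptions, not values from the thesis:

```python
def simulate(kc, n_steps=2000, dt=0.001):
    """Two velocity-mode axes with mismatched gains tracking the same ramp.
    kc is the gain of the extra controller acting on the inter-axis error."""
    g1, g2 = 1.0, 0.7            # mismatched axis gains (assumed values)
    kp = 50.0                    # proportional position gain, both axes
    x1 = x2 = 0.0
    max_sync_err = 0.0
    for k in range(n_steps):
        ref = 0.5 * k * dt       # common ramp reference for both axes
        e_sync = x1 - x2         # position difference between the axes
        u1 = kp * (ref - x1) - kc * e_sync
        u2 = kp * (ref - x2) + kc * e_sync
        x1 += dt * g1 * u1
        x2 += dt * g2 * u2
        max_sync_err = max(max_sync_err, abs(e_sync))
    return max_sync_err

uncoupled = simulate(kc=0.0)     # plain cascade: each axis ignores the other
coupled = simulate(kc=200.0)     # added controller on the inter-axis error
```

With mismatched gains the plain cascade lets a steady synchronization error build up; feeding the inter-axis difference back with opposite signs suppresses it.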
Abstract:
This study aimed to compare thematic maps of soybean yield for different sampling grids, using geostatistical methods (semivariance function and kriging). The analysis was performed with soybean yield data in t ha⁻¹ from a commercial area with regular grids with distances between points of 25x25 m, 50x50 m, 75x75 m and 100x100 m, with 549, 188, 66 and 44 sampling points, respectively, and with data obtained by yield monitors. Optimized sampling schemes were also generated with the Simulated Annealing algorithm, using maximization of the overall accuracy measure as the optimization criterion. The results showed that sample size and sample density influenced the description of the spatial distribution of soybean yield. When the sample size was increased, the thematic maps described the spatial variability of soybean yield more efficiently (higher accuracy indices and lower sums of squared estimation errors). In addition, more accurate maps were obtained, especially with the optimized sample configurations with 188 and 549 sample points.
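A minimal numpy sketch of the kriging step (semivariance function plus the ordinary kriging system); the spherical variogram parameters and the toy yield samples are illustrative assumptions:

```python
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, a=100.0):
    """Spherical semivariogram model gamma(h) with range a."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, nugget + sill, np.where(h == 0.0, 0.0, g))

def ordinary_kriging(coords, values, target):
    """Ordinary kriging prediction at `target` (an exact interpolator)."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = spherical(d)
    A[n, :n] = A[:n, n] = 1.0     # Lagrange row/column: weights sum to 1
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = spherical(np.linalg.norm(coords - target, axis=1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)     # kriging weights plus Lagrange multiplier
    return float(w[:n] @ values)

# Toy 25 m grid cell with made-up soybean yields in t/ha
coords = np.array([[0.0, 0.0], [25.0, 0.0], [0.0, 25.0], [25.0, 25.0]])
values = np.array([3.1, 3.4, 2.9, 3.6])
pred = ordinary_kriging(coords, values, np.array([12.5, 12.5]))
```

A thematic map is then just this prediction evaluated on a dense grid of targets; the same weight vector also yields the kriging variance used to judge map accuracy.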
Abstract:
In this Master's thesis, agent-based modeling is used to analyze phenomena related to maintenance strategy. The main research question answered was: what does the agent-based model built for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? The main outcome of the study is thus an analysis of how profitability can be increased in an industrial maintenance context. To answer the question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model, which followed a standard simulation modeling procedure. The simulation results from the agent-based model then answered the research question. Specifically, the results of the modeling and of this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine, and under certain conditions for the maintainer as well; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, provided the owners of the machines share a substantial amount of their value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing maintenance strategy, and there is real systemic value in more accurate machine condition measurement systems.
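Result (1) can be illustrated with a toy agent-based sketch; this is not the thesis model, and all costs, wear rates, and the measurement-noise level are illustrative assumptions:

```python
import random

def owner_profit(threshold, horizon=5000, seed=1):
    """One machine agent. Condition wears down stochastically; maintenance is
    triggered when the *measured* (noisy) condition drops below `threshold`.
    Running earns revenue proportional to condition; an unplanned breakdown
    costs far more than planned maintenance."""
    rng = random.Random(seed)
    condition, profit = 1.0, 0.0
    for _ in range(horizon):
        profit += condition                          # revenue while running
        condition -= rng.uniform(0.0, 0.004)         # stochastic wear
        measured = condition + rng.gauss(0.0, 0.02)  # noisy condition sensor
        if measured <= threshold:
            profit -= 30.0                           # planned maintenance cost
            condition = 1.0
        elif condition <= 0.0:
            profit -= 200.0                          # breakdown: corrective cost
            condition = 1.0
    return profit

profits = {t: owner_profit(t) for t in (0.05, 0.2, 0.4, 0.6, 0.8)}
best = max(profits, key=profits.get)   # maintenance point that pays best
```

Maintaining too early wastes maintenance cost, too late wastes revenue and risks breakdowns, so profit peaks at an interior threshold; the sensor noise term is the hook for result (4), since a noisier `measured` blurs exactly this optimization.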
Abstract:
Pulse Response Based Control (PRBC) is a recently developed minimum-time control method for flexible structures. The flexible behavior of the structure is represented by a set of discrete-time sequences, which are the responses of the structure to rectangular force pulses applied by the actuators that control it. The set of pulse responses, the desired outputs, and the force bounds form a numerical optimization problem whose solution is a minimum-time piecewise constant control sequence for driving the system to a desired final state. The method was developed for driving positive semi-definite systems. In case the system is positive definite, some final states of the system may not be reachable. Necessary conditions for reachability of the final states are derived for systems with a finite number of degrees of freedom, and numerical results are presented that confirm the derived analytical conditions. Numerical simulations of maneuvers of distributed parameter systems have shown a relationship between the error in the estimated minimum control time and the sampling interval.
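The optimization view can be illustrated with a minimal stand-in, not the PRBC formulation itself: for a rigid-body double integrator with bounded piecewise constant force, the shortest horizon reaching a rest-to-rest target can be found by searching for the first feasible linear program (the dynamics, bounds, and target are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

DT, U_MAX = 0.1, 1.0
A = np.array([[1.0, DT], [0.0, 1.0]])   # discrete double-integrator dynamics
B = np.array([0.5 * DT**2, DT])

def reachable(n, target=(1.0, 0.0)):
    """Feasibility LP: can bounded piecewise constant forces u_0..u_{n-1}
    drive the system from rest to `target` in exactly n sampling intervals?"""
    # x_n = sum_k A^(n-1-k) B u_k  (x_0 = 0) gives two equality constraints
    cols = np.column_stack([np.linalg.matrix_power(A, n - 1 - k) @ B
                            for k in range(n)])
    res = linprog(c=np.zeros(n), A_eq=cols, b_eq=np.array(target),
                  bounds=[(-U_MAX, U_MAX)] * n, method="highs")
    return res.success, res.x

n = 1
ok, u = reachable(n)
while not ok:                    # first feasible horizon = minimum time
    n += 1
    ok, u = reachable(n)

x = np.zeros(2)
for uk in u:                     # verify the final state by simulation
    x = A @ x + B * uk
```

With flexible modes added, extra equality rows per mode constrain the residual vibration, which is where the reachability conditions in the abstract come into play.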
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data: the Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters up to the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
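A minimal numpy sketch of the IPCW idea referenced above: a Kaplan-Meier estimator that accepts per-subject weights, so that inverse-probability-of-censoring weights can offset dependent censoring. The toy spell data and unit weights are illustrative assumptions; with unit weights the estimator reduces to the ordinary Kaplan-Meier:

```python
import numpy as np

def weighted_km(times, events, weights):
    """Kaplan-Meier with subject weights w_i (IPCW weights in practice):
    S(t) = prod over event times of (1 - sum(w, events at t) / sum(w, at risk))."""
    times, events, weights = map(np.asarray, (times, events, weights))
    t_uniq = np.unique(times[events == 1])   # distinct observed event times
    surv, s = [], 1.0
    for t in t_uniq:
        at_risk = weights[times >= t].sum()              # weighted risk set
        d = weights[(times == t) & (events == 1)].sum()  # weighted events
        s *= 1.0 - d / at_risk
        surv.append((float(t), s))
    return surv

# Toy unemployment-spell data: duration, event indicator (1 = exit observed),
# and hypothetical IPCW weights (all 1.0 here)
times   = [2, 3, 3, 5, 6, 8]
events  = [1, 1, 0, 1, 0, 1]
weights = [1.0] * 6
curve = weighted_km(times, events, weights)
```

In the IPCW setting the unit weights would be replaced by 1 over each subject's estimated probability of remaining uncensored, up-weighting subjects who resemble those lost to attrition.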
Abstract:
Mergers are often used as a purposeful strategic tool. Previous research has mainly concentrated on the actual M&A process and on the elements leading to that decision. The purpose of this study is to examine the demerger of Cloetta Fazer and investigate the reasons that led to it. The problem is approached by first evaluating whether the merger was a success in the first place, as described in earlier studies, the media and academic research. From this information the motives for the demerger are derived. The research material was collected from secondary sources. The data sample was divided into two categories: a timeline sample and an additional-information sample. The timeline sample was collected by systematically gathering news from the Sanoma News and Alma News databases. This data was indexed and a timeline was constructed; from it, key dates, themes and elements were identified, and further data gathering concentrated on those themes. The results of the study suggest that the merger was not as great a success as it was described in earlier years. Explanations for why the merger ended in demerger vary greatly. From the material, a key factor was identified: a lack of, or error in, long-term strategic planning. This was due to a death in the family during strategy creation and to mistakes made in the pre-merger phase.
Abstract:
Log measurement before sawing and sawing-pattern optimization have developed considerably over the last 10 years. As the profitability of sawing has declined, efficient use of the raw material has become an important part of the process. Advances in measurement technology have made it possible to measure the shape and diameters of a log at different points more accurately than before. Sawing-pattern optimization aims at the most efficient possible use of the raw material, i.e. obtaining the best possible yield from each individual sawn log. Measurement accuracy is directly linked to the success of the sawing-pattern optimization. Typically, the log measurement before sawing and the sawing-pattern optimization come from the same supplier. This thesis examined log scanners from two suppliers and the success of the optimization based on them. A glass-fiber model log was measured with both suppliers' scanners, so that the success of the measurement and optimization could be compared directly against the optimal results. The thesis applied the procedure I created in my bachelor's thesis for verifying the accuracy of a log scanner. From the measurement and optimization errors it was possible to calculate how much loss the sawmill incurred compared with the optimal measurement and optimization result. Even small errors in optimization and measurement affect the profitability of sawing when considering a sawmill that saws 8,000–10,000 logs during a single work shift. Based on the results, the scanners measure slightly inaccurately, and the measurements from the two scanners produced different sawing patterns as the optimization result. Given the measurement error, it could be concluded that improving measurement accuracy can improve the profitability of sawing.
Abstract:
Variant A.
Abstract:
In this study, a neuro-fuzzy estimator was developed for estimating the biomass concentration of the microalga Synechococcus nidulans from initial batch concentrations, aiming to predict daily productivity. Nine replicate experiments were performed. Growth was monitored daily through the optical density of the culture medium up to the end of the exponential phase. The network training followed a full 3³ factorial design, in which the factors were the number of days in the entry vector (3, 5 and 7 days), the number of clusters (10, 30 and 50) and the internal weight softening parameter sigma (0.30, 0.45 and 0.60). These factors were evaluated against the sum of squared errors in the validations, which covered 24 (A) and 18 (B) days of culture growth. The validations demonstrated that in long-term experiments (validation A) few clusters and a high sigma are necessary, whereas in short-term experiments (validation B) sigma did not influence the result. The optimum occurred with 3 days in the entry vector, 10 clusters and sigma 0.60, and the mean coefficient of determination was 0.95. The neuro-fuzzy estimator proved to be a credible alternative for predicting microalgae growth.
Abstract:
A new method for sampling the exact (within the nodal error) ground-state distribution and nondifferential properties of multielectron systems is developed and applied to first-row atoms. The calculated properties are the distribution moments and the electronic density at the nucleus (the δ operator). For this purpose, new simple trial functions are developed and optimized. First, using hydrogen as a test case, we demonstrate the accuracy of our algorithm and its sensitivity to error in the trial function. Applications to first-row atoms are then described. We obtain results that are more satisfactory than those obtained previously using Monte Carlo methods, despite the relative crudeness of our trial functions. A comparison is also made with the results of highly accurate post-Hartree-Fock calculations, thereby illuminating the nodal error in our estimates. Taking into account the CPU time spent, our results, particularly for the δ operator, have a relatively large variance. Several ways of improving the efficiency, together with some extensions of the algorithm, are suggested.
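A minimal sketch in the same spirit, not the paper's algorithm: Metropolis sampling of the exact hydrogen 1s density |ψ|² ∝ e^(−2r) in atomic units, estimating the moment ⟨r⟩ = 1.5 (the step size and sample count are arbitrary choices):

```python
import math, random

def sample_hydrogen_moment(n_samples=200_000, step=0.6, seed=0):
    """Metropolis walk over 3-D electron positions with density |psi|^2,
    psi = exp(-r) (exact hydrogen 1s, atomic units); estimates <r> = 1.5."""
    rng = random.Random(seed)
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    total = 0.0
    for _ in range(n_samples):
        xp = x + rng.uniform(-step, step)
        yp = y + rng.uniform(-step, step)
        zp = z + rng.uniform(-step, step)
        rp = math.sqrt(xp * xp + yp * yp + zp * zp)
        # acceptance ratio |psi(r')|^2 / |psi(r)|^2 = exp(-2 (r' - r))
        if rng.random() < math.exp(-2.0 * (rp - r)):
            x, y, z, r = xp, yp, zp, rp
        total += r
    return total / n_samples

estimate = sample_hydrogen_moment()
```

For hydrogen the trial function can be exact, so there is no nodal error; for multielectron atoms the walk samples an approximate |ψ_T|² instead, which is where trial-function quality enters.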
Abstract:
The critical period hypothesis, put forward by Lenneberg in the 1960s, claimed that a child could acquire a second language without difficulty up to roughly the age of puberty. After this period, learning another language would be difficult, owing to the lateralization of the brain. At the same time, Chomsky's work enriched this theory with the idea of Universal Grammar, which holds that we all possess from birth the universal linguistic elements that allow us to acquire a mother tongue. As long as Universal Grammar is active, our mother tongue develops, and this is why, if we learn another language during this period, its acquisition takes place almost naturally. For this reason, the earlier a language is learned, the more successfully it will be mastered. Building on this theoretical framework, as well as on Error Analysis, a tool that allows the teacher to predict certain errors in order to devise second-language learning strategies, the present study attempts to verify whether age is a factor that positively or negatively influences second-language learning, in this case Spanish, through a comparative analysis of the prepositions a/en in two different groups of students.
Abstract:
The objective of our research is to explore and study the question of computer-based instrumentation for archaeological reconstruction projects in monumental architecture, with the aim of proposing new means. The starting point of the research is a question: "How, and with which computational means, could architectural reconstruction projects be carried out in archaeology?" This question first required a study of the different restitution approaches that have been brought to bear on archaeological reconstruction projects at their various phases. The aim is to understand the (epistemological) evolution of the methodological approaches that actors in this field have adopted in order to apply information and communication technologies (ICT) to the built heritage domain. This study allowed us to identify two main avenues: a first that aims exclusively at the "representation" of project results, and a second that aims at modeling the process itself in order to assist the archaeologist in the different phases of the project. We show that it is the second approach that enables this combination and gives archaeologists better use of the possibilities that computational tools can and will be able to offer. This part demonstrates the systemic and complex nature of applying ICT to archaeological restitution. The multitude of actors, of technical, cultural and other conditions, and of means used, as well as the variety of objectives pursued in archaeological reconstruction projects, calls for exploring a new approach that takes this complexity into account. To reach our research objective, a further study of the nature of the archaeological process is required.
The aim is to understand the links and interrelations established between the different technical and intellectual units at play, as well as the different modes of reasoning present in archaeological reconstruction projects for the built heritage. This study highlights the direct relationship between the subjective character of the process and the great variability of the approaches and reasoning employed. The research is therefore exploratory and propositional, confronting the systemic and complex character of concrete experience with the elements of knowable reality as found in the scholarly literature. The study of archaeological reasoning through scholarly publications allows us to propose a first typology of the reasoning studied. Each of these typologies reflects a methodological approach based on an organization of actions that can be recorded in a set of reasoning modules. From the phenomena and processes observed, this research brings out a model that represents the interrelations and interactions as well as the specific products of these complex linkages. This model reflects a recursive, trial-and-error process in which the actor successively "experiments", according to the objectives of the undertaking and through chosen reasoning modules, with several answers to the questions that arise, concerning the definition of the corpus, description, structuring, interpretation and validation of the results, until the latter appear to satisfy the initial objectives. The model is validated through a case study of the seventh pylon of the temple of Karnak in Egypt. The results obtained show that reasoning modules represent an interesting solution for assisting archaeologists in archaeological reconstruction projects. These modules offer a multiplicity of combinations of actions and thus favor a diversity of approaches and reasoning that can be brought to bear on such projects, while maintaining the evolving nature of the overall system.