995 results for: Simulation mastication effort
Abstract:
This dissertation describes an approach for developing a real-time simulation of working mobile vehicles based on multibody modeling. Multibody modeling allows a comprehensive description of the constrained motion of the mechanical systems involved and permits the equations of motion to be solved in real time. By carefully selecting the multibody formulation method, it is possible to increase the accuracy of the multibody model while still solving the equations of motion in real time. In this study, a multibody procedure based on semi-recursive and augmented Lagrangian methods for real-time dynamic simulation is studied in detail. In the semi-recursive approach, a velocity transformation matrix is introduced to map the dependent coordinates into relative (joint) coordinates, which reduces the number of generalized coordinates. The augmented Lagrangian method is based on global coordinates, and constraints are accounted for through an iterative process. A multibody system can be modelled using either rigid or flexible bodies. When flexible bodies are used, the system can be described with a floating frame of reference formulation, in which the required deformation modes are obtained from a finite element model. As a finite element model typically involves a large number of degrees of freedom, a reduced set of deformation modes can be obtained by employing a model order reduction method such as Guyan reduction, the Craig-Bampton method or Krylov subspace methods, as shown in this study. The constrained motion of working mobile vehicles is actuated by forces from hydraulic actuators. In this study, the hydraulic system is modeled using lumped fluid theory, in which the hydraulic circuit is divided into discrete volumes and pressure wave propagation in the hoses and pipes is neglected. Contact modeling is divided into two stages: contact detection, which determines when and where contact occurs, and contact response, which provides the force acting at the collision point. The friction between tire and ground is modelled using the LuGre friction model, which describes the frictional force between two surfaces. Typically, the equations of motion are solved in full-matrix form, where the sparsity of the matrices is not exploited. Increasing the number of bodies and constraint equations makes the system matrices large and sparse in structure. To increase computational efficiency, a technique for the solution of sparse matrices is proposed in this dissertation and its implementation is demonstrated: both the augmented Lagrangian and semi-recursive methods are implemented using the sparse matrix technique. The numerical examples show that the proposed approach is applicable and produces appropriate results within the real-time period.
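For reference, the LuGre model cited above is commonly written in the following standard form (this is the textbook formulation; the dissertation's own parameter values are not reproduced here):

$$\dot z = v - \frac{\sigma_0\,\lvert v\rvert}{g(v)}\,z, \qquad g(v) = F_c + (F_s - F_c)\,e^{-(v/v_s)^2}, \qquad F = \sigma_0 z + \sigma_1 \dot z + \sigma_2 v,$$

where $z$ is the internal bristle-deflection state, $v$ the relative sliding velocity, $F_c$ and $F_s$ the Coulomb and static friction levels, $v_s$ the Stribeck velocity, and $\sigma_0$, $\sigma_1$, $\sigma_2$ the bristle stiffness, bristle damping and viscous friction coefficients.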
Abstract:
The aim of this master's thesis is to develop a two-dimensional drift-diffusion model which describes charge transport in organic solar cells. The main benefit of a two-dimensional model compared to a one-dimensional one is the inclusion of the nanoscale morphology of the active layer of a bulk heterojunction solar cell. The developed model was used to study recombination dynamics at the donor-acceptor interface. In some cases, it was possible to determine effective parameters which reproduce the results of the two-dimensional model in the one-dimensional case. A summary of the theory of charge transport in semiconductors is presented and discussed in the context of organic materials. Additionally, the normalization and discretization procedures required to find a numerical solution to the charge transport problem are outlined. The charge transport problem was solved by implementing an iterative scheme called successive over-relaxation. The obtained solution is given as position-dependent electric potential, free charge carrier concentrations and current densities in the active layer. An interfacial layer, separating the pure phases, was introduced in order to describe the charge dynamics occurring at the interface between the donor and acceptor. For simplicity, an effective generation of free charge carriers in the interfacial layer was implemented; the pure phases simply act as transport layers for the photogenerated charges. Langevin recombination was assumed in the two-dimensional model, and an analysis of the apparent recombination rate in the one-dimensional case is presented. At open circuit, the recombination rate in the two-dimensional model effectively resembles reduced Langevin recombination. Replicating the J-U curves obtained with the two-dimensional model is, however, not possible by introducing a constant reduction factor in the Langevin recombination rate. The impact of an acceptor domain in the pure donor phase was investigated. Two cases were considered: one where the acceptor domain is isolated and another where it is connected to the bulk of the acceptor. A comparison to the case where no isolated domains exist was made in order to quantify the observed reduction in the photocurrent. The results show that all charges generated at the isolated domain are lost to recombination, but the domain does not have a major impact on charge transport. Trap-assisted recombination at interfacial trap states was investigated, as well as the surface dipole caused by the trapped charges. A theoretical expression for the ideality factor n_id as a function of generation was derived and shown to agree with simulation data. When the theoretical expression was fitted to the simulation data, no interface dipole was observed.
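As a rough illustration of the successive over-relaxation scheme named above, the sketch below applies SOR to a discretized 2-D Poisson equation for the electric potential. The grid size, source term and relaxation factor are invented for the example and are not taken from the thesis.

```python
# Minimal SOR sketch for a 2-D Poisson problem, laplacian(phi) = -rho,
# with phi = 0 on the boundary of a unit square (illustrative only).
import numpy as np

def sor_poisson(rho, h, omega=1.8, tol=1e-6, max_iter=20000):
    phi = np.zeros_like(rho)
    n, m = rho.shape
    for _ in range(max_iter):
        max_diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                # Gauss-Seidel update of the 5-point stencil, relaxed by omega.
                new = (1.0 - omega) * phi[i, j] + omega * 0.25 * (
                    phi[i + 1, j] + phi[i - 1, j]
                    + phi[i, j + 1] + phi[i, j - 1]
                    + h * h * rho[i, j]
                )
                max_diff = max(max_diff, abs(new - phi[i, j]))
                phi[i, j] = new
        if max_diff < tol:   # stop once the sweep no longer changes phi
            break
    return phi

# Illustrative use: a point source in the middle of a 65 x 65 grid.
source = np.zeros((65, 65))
source[32, 32] = 1.0
potential = sor_poisson(source, h=1.0 / 64)
```

The relaxation factor omega between 1 and 2 over-corrects each Gauss-Seidel step, which is what accelerates convergence relative to plain iteration.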
Abstract:
The aim of this research is to develop a tool for organizing coopetitive relationships between organizations on the basis of a two-sided Internet platform. The main result of this master's thesis is a detailed description of the concept of lead-generating, Internet-platform-based coopetition. Using agent-based modelling and simulation, results were obtained which suggest that the developed concept can have a positive effect on certain industries (e.g., the web-design studio market) and can potentially bring benefits and extra profitability to most companies operating in such an industry. The results also indicate that the developed instrument can increase the degree of transparency of the market to which it is applied.
Abstract:
To date there is no documented procedure for extrapolating findings of an isometric nature to a whole-body performance setting. The purpose of this study was to quantify the reliability of perceived exertion as a means of controlling neuromuscular output during an isometric contraction. Twenty-one varsity athletes completed a maximal voluntary contraction and a 2 min constant-force contraction at both the start and end of the study. Between pre- and post-testing, all participants completed a 2 min constant perceived exertion contraction once a day for 4 days. The intra-class correlation coefficient (R = 0.949) and standard error of measurement (SEM = 5.12 Nm) indicated that the isometric contraction was reliable. Limits of agreement demonstrated only moderate initial reliability, yet with smaller limits towards the end of the 4 training sessions. In conclusion, athletes naïve to a constant-effort isometric contraction will produce reliable and acceptably stable results after one familiarization session has been completed.
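For reference, assuming the study used the standard definition, the standard error of measurement quoted above is obtained from the between-trial standard deviation $s$ and the intra-class correlation as

$$\mathrm{SEM} = s\,\sqrt{1 - \mathrm{ICC}},$$

so with ICC = 0.949, roughly $\sqrt{0.051} \approx 23\%$ of the trial-to-trial standard deviation is attributed to measurement error.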
Abstract:
The current study was an exploration of why some novices are more successful than their peers when learning from the Internet by examining the relations among time spent with relevant information and changes in invested mental effort during Internet navigations as well as achievement. Navigation behaviours and learner characteristics were investigated as predictors of time spent with relevant information and changes in mental effort. Undergraduates (N = 85, M_age = 20 years, 5 months) searched the Internet for information corresponding to a low-knowledge topic for 20 min while their eye gaze and pupil size were recorded. Pupil diameter was used as an objective, continuous measure of mental effort. Participants also completed questionnaires or computer tasks pertaining to self-regulated learning characteristics (general intrinsic goal orientation and effort regulation) and cognitive factors (working memory control, distractibility and cognitive style). All analyses controlled for general mental ability, reading comprehension, topic and Internet knowledge, and overall motivation. A greater proportion of time spent with relevant information predicted higher scores on an achievement test. Interestingly, time spent with relevant information partially mediated the positive relation between the frequency of increases in invested mental effort and achievement. Surprisingly, intrinsic goal orientation was negatively related to time spent with relevant information, and effort regulation was negatively related to the frequency of increases in invested mental effort. These findings have implications for supports when novices guide their own learning, especially when using the Internet.
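In the standard single-mediator regression framework that partial-mediation statements like this rely on, the total effect of a predictor $X$ on the outcome $Y$ decomposes as

$$c = c' + ab,$$

where $a$ is the effect of $X$ on the mediator $M$ (here, time with relevant information), $b$ the effect of $M$ on $Y$ controlling for $X$, and $c'$ the remaining direct effect; partial mediation means the indirect term $ab$ is significant while $c'$ does not vanish.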
Abstract:
The Robocup Rescue Simulation System (RCRSS) is a dynamic system of multi-agent interaction, simulating a large-scale urban disaster scenario. Teams of rescue agents are charged with the tasks of minimizing civilian casualties and infrastructure damage while competing against limitations on time, communication, and awareness. This thesis provides the first known attempt at applying Genetic Programming (GP) to the development of behaviours necessary to perform well in the RCRSS. Specifically, this thesis studies the suitability of GP to evolve the operational behaviours required of each type of rescue agent in the RCRSS. The system developed is evaluated in terms of the consistency with which expected solutions are the target of convergence, as well as by comparison to previous competition results. The results indicate that GP is capable of converging to some forms of expected behaviour, but that additional evolution of strategizing behaviours must be performed in order to become competitive. An enhancement to the standard GP algorithm is proposed which is shown to simplify the initial search space, allowing evolution to occur much more quickly. In addition, two forms of population are employed and compared in terms of their apparent effects on the evolution of control structures for intelligent rescue agents. The first is a single population in which each individual comprises three distinct trees for the respective control of three types of agents; the second is a set of three co-evolving subpopulations, one for each type of agent. Multiple populations of cooperating individuals appear to achieve higher proficiencies in training, but testing on unseen instances raises the issue of overfitting.
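As a minimal sketch of tree-based genetic programming of the kind the thesis applies, the toy example below evolves expression trees against a symbolic-regression target. The function set, fitness task and mutation-only variation are illustrative simplifications, not the thesis's actual setup (which evolves agent behaviours and uses richer operators such as crossover).

```python
# Toy GP: evolve expression trees toward the target f(x) = x^2 + x.
import random
import operator

FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]
TERMS = ["x", 1.0]

def random_tree(depth=3):
    # Grow a random tree; terminals become more likely as depth shrinks.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    fn, arity = random.choice(FUNCS)
    return (fn, [random_tree(depth - 1) for _ in range(arity)])

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    fn, children = tree
    return fn(*(evaluate(c, x) for c in children))

def fitness(tree):
    # Sum of squared errors against the target behaviour (lower is better).
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree, p=0.1):
    # Subtree mutation: occasionally replace a node with a fresh subtree.
    if random.random() < p:
        return random_tree(2)
    if isinstance(tree, tuple):
        fn, children = tree
        return (fn, [mutate(c, p) for c in children])
    return tree

population = [random_tree() for _ in range(200)]
for _ in range(50):
    population.sort(key=fitness)
    parents = population[:50]          # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(150)]
population.sort(key=fitness)
print("best SSE:", fitness(population[0]))
```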
Abstract:
The primary objectives of the present study were 1) to examine the relationship between health-enhancing physical activity (HEPA) and well-being across the previous day and 2) to examine the role of basic psychological need satisfaction as a potential mediator of the HEPA-well-being relationship. Participants (N = 203) were a convenience sample of undergraduate students, with data collected cross-sectionally. HEPA was generally associated with well-being (r values ranged from .18 to .62). Multiple mediation analyses supported psychological need satisfaction as a mechanism underpinning the HEPA-well-being relationship. Subsequent analyses demonstrated that effort put forth in HEPA activities, as opposed to frequency or duration, uniquely predicted well-being. The role of effort was further highlighted in the multiple mediation analyses. As such, future research may wish to investigate the utility of a HEPA program that facilitates effortful engagement and fulfillment of basic psychological needs.
Abstract:
This study has two main objectives. First, the phlebotomy process at the St. Catharines Site of the Niagara Health System is investigated; the process starts when an order for a blood test is placed and ends when the specimen arrives at the lab. The performance measure is the flow time of the process, which reflects the concerns and interests of both the hospital and the patients. Three popular operational methodologies are applied to reduce the flow time and improve the process: DMAIC from Six Sigma, lean principles, and simulation modeling. Potential improvements are suggested for the St. Catharines Site that could yield an average seven-minute reduction in flow time. The second objective addresses the fact that these three methodologies have not previously been combined in a process improvement effort. A structured framework combining them is developed to benefit future studies of phlebotomy and other hospital processes.
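To make the simulation-modeling leg concrete, here is a toy discrete-event model of a phlebotomy flow, written with the third-party simpy library. The arrival, draw and transport times and the staffing level are invented for illustration and are not the hospital's data.

```python
# Toy discrete-event model of order-to-lab flow time (minutes).
import random
import simpy

def order(env, phlebotomists, flow_times):
    placed = env.now                                     # order placed
    with phlebotomists.request() as req:
        yield req                                        # queue for a phlebotomist
        yield env.timeout(random.expovariate(1 / 5.0))   # blood draw, mean 5 min
    yield env.timeout(random.expovariate(1 / 10.0))      # transport to lab, mean 10 min
    flow_times.append(env.now - placed)

def arrivals(env, phlebotomists, flow_times):
    while True:
        yield env.timeout(random.expovariate(1 / 8.0))   # a new order every 8 min on average
        env.process(order(env, phlebotomists, flow_times))

env = simpy.Environment()
staff = simpy.Resource(env, capacity=2)                  # two phlebotomists on shift
flow_times = []
env.process(arrivals(env, staff, flow_times))
env.run(until=8 * 60)                                    # one 8-hour shift
print(f"mean flow time: {sum(flow_times) / len(flow_times):.1f} min")
```

A model of this shape lets lean or DMAIC candidate changes (staffing, batching, transport policy) be tested on flow time before touching the real process.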
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
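The Monte Carlo test technique underlying both the exact tests and the bounds tests above can be sketched in a few lines: when a statistic is pivotal under the null, its finite-sample null distribution can be simulated and an exact p-value read off the rank of the observed value. The toy statistic below (a t-type criterion for a scalar mean) is a placeholder, not one of the paper's MLR criteria.

```python
# Exact Monte Carlo p-value for a pivotal statistic (toy example).
import numpy as np

rng = np.random.default_rng(0)

def statistic(sample):
    # Placeholder pivotal statistic: |mean| / (sd / sqrt(n)) for H0: mu = 0.
    n = len(sample)
    return abs(sample.mean()) / (sample.std(ddof=1) / np.sqrt(n))

def mc_pvalue(observed, n, n_rep=999):
    # Simulate the null distribution; pivotality means sigma can be fixed to 1.
    sims = np.array([statistic(rng.standard_normal(n)) for _ in range(n_rep)])
    # Exact p-value: rank of the observed statistic among the simulations.
    return (1 + np.sum(sims >= observed)) / (n_rep + 1)

data = rng.standard_normal(30) + 0.4        # illustrative sample
print(mc_pvalue(statistic(data), n=30))
```

With n_rep chosen so that (n_rep + 1) times the level is an integer (e.g., 999 replications at the 5% level), the test has exactly the nominal size in finite samples.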
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods; yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values, for both the standard tests and the new tests suggested. We show that the MC test procedure conveniently solves the intractable null distribution problem, in particular the problems raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with: (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
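As one concrete instance of the standard criteria listed above, a sketch of the Goldfeld-Quandt statistic follows; the simulated data and the fraction of central observations omitted are assumptions made for the example, not the paper's design.

```python
# Goldfeld-Quandt test sketch: compare residual variances of the two
# ends of the sample after sorting by the suspect regressor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 120
x = np.sort(rng.uniform(0, 10, n))
y = 2.0 + 0.5 * x + rng.normal(scale=0.2 * x)    # error variance grows with x

def rss(xs, ys):
    # OLS residual sum of squares and residual degrees of freedom.
    X = np.column_stack([np.ones_like(xs), xs])
    resid = ys - X @ np.linalg.lstsq(X, ys, rcond=None)[0]
    return resid @ resid, len(ys) - X.shape[1]

k = int(0.4 * n)                                 # keep 40% at each end, omit the middle 20%
rss1, df1 = rss(x[:k], y[:k])
rss2, df2 = rss(x[-k:], y[-k:])
F = (rss2 / df2) / (rss1 / df1)                  # large F suggests increasing variance
p = 1 - stats.f.cdf(F, df2, df1)
print(f"GQ F = {F:.2f}, p = {p:.4f}")
```

Under the paper's Monte Carlo approach, the same statistic would instead be referred to its simulated null distribution, giving an exact rather than asymptotic p-value.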
Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points which are needed to perform the tests have been derived for the location-scale model; hence reliance on available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of standard linear regression models, based on the technique of Monte Carlo tests.
Abstract:
In the context of multivariate linear regression (MLR) and seemingly unrelated regressions (SURE) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose finite- and large-sample likelihood-based test procedures for possibly non-linear hypotheses on the coefficients of MLR and SURE systems.