862 results for Error correction model
Abstract:
An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture-mapped cylinder. Tracking is formulated as an image registration problem in the cylinder's texture map image. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. To solve the registration problem in the presence of lighting variation and head motion, the residual registration error is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable online tracking is achieved via regularized, weighted least-squares minimization of the registration error. The regularization term tends to limit potential ambiguities that arise in the warping and illumination templates, enabling stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The formulation is tailored to take advantage of the texture-mapping hardware available in many workstations, PCs, and game consoles. The non-optimized implementation runs at about 15 frames per second on an SGI O2 graphics workstation. Extensive experiments evaluating the effectiveness of the formulation are reported. The sensitivity of the technique to illumination, regularization parameters, errors in initial positioning, and internal camera parameters is analyzed. Examples and applications of tracking are reported.
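The regularized, weighted least-squares step described in this abstract amounts to a single linear solve per frame. The sketch below is a minimal illustration, not the paper's implementation: the template matrix `B`, weights `w`, and regularization strength `lam` are hypothetical placeholders for the warping/illumination templates, per-pixel confidence weights, and regularization term.

```python
import numpy as np

def solve_update(B, r, w, lam):
    """Regularized, weighted least-squares solve for template coefficients.

    B   : (n_pixels, n_templates) warping + illumination templates
    r   : (n_pixels,) registration residual between frames
    w   : (n_pixels,) per-pixel confidence weights
    lam : regularization strength limiting template ambiguities

    Minimizes ||W^(1/2) (B q - r)||^2 + lam * ||q||^2 over q.
    """
    W = np.diag(w)
    A = B.T @ W @ B + lam * np.eye(B.shape[1])
    return np.linalg.solve(A, B.T @ W @ r)
```

With `lam = 0` this reduces to ordinary weighted least squares; the regularization term damps directions in which warping and illumination templates are nearly ambiguous.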
Abstract:
We investigated adaptive neural control of precision grip forces during object lifting. A model is presented that adjusts reactive and anticipatory grip forces to a level just above that needed to stabilize lifted objects in the hand. The model obeys principles of cerebellar structure and function by using slip sensations as error signals to adapt phasic motor commands to tonic force generators associated with output synergies controlling grip aperture. The learned phasic commands are weight- and texture-dependent. Simulations of the new circuit model reproduce key aspects of experimental observations of force application. Over learning trials, the onset of grip force buildup comes to lead the load force buildup, and the rate of rise of grip force, but not load force, scales inversely with the friction of the gripped object.
Abstract:
Technological advances in genotyping have given rise to hypothesis-based association studies of increasing scope. As a result, the scientific hypotheses addressed by these studies have become more complex and more difficult to address using existing analytic methodologies. Obstacles to analysis include inference in the face of multiple comparisons, complications arising from correlations among the SNPs (single nucleotide polymorphisms), choice of their genetic parametrization and missing data. In this paper we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method for Multilevel Inference of SNP Associations, MISA, allows computation of multilevel posterior probabilities and Bayes factors at the global, gene and SNP level, with the prior distribution on SNP inclusion in the model providing an intrinsic multiplicity correction. We use simulated data sets to characterize MISA's statistical power, and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and have been externally "validated" in independent studies. We examine sensitivity of the NCOCS results to prior choice and method for imputing missing data. MISA is available in an R package on CRAN.
Abstract:
A model of telescoping is proposed that assumes no systematic errors in dating. Rather, the overestimation of recent occurrences of events is based on the combination of three factors: (1) Retention is greater for recent events; (2) errors in dating, though unbiased, increase linearly with the time since the dated event; and (3) intrusions often occur from events outside the period being asked about, but such intrusions do not come from events that have not yet occurred. In Experiment 1, we found that recall for colloquia fell markedly over a 2-year interval, the magnitude of errors in psychologists' dating of the colloquia increased at a rate of 0.4 days per day of delay, and the direction of the dating error was toward the middle of the interval. In Experiment 2, the model used the retention function and dating errors from the first study to predict the distribution of the actual dates of colloquia recalled as being within a 5-month period. In Experiment 3, the findings of the first study were replicated with colloquia given by, instead of for, the subjects.
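The three factors of this telescoping model lend themselves to a small Monte Carlo sketch. Everything here is illustrative except the 0.4 days-per-day growth of dating error taken from Experiment 1; the retention values, the normal error distribution, and the clamping rule for future dates are assumptions.

```python
import numpy as np

def simulate_telescoping(true_ages, retention, slope=0.4, window=150, n=10000, seed=0):
    """Monte Carlo sketch of the three-factor telescoping account.

    true_ages : days since each event occurred
    retention : recall probability for each event (decreasing with age)
    slope     : growth of dating-error SD per day of delay (0.4 in Experiment 1)
    window    : report events dated as within this many days (e.g. a 5-month period)

    Returns, per event, the probability it is recalled AND dated within the window.
    """
    rng = np.random.default_rng(seed)
    rates = np.zeros(len(true_ages))
    for i, (t, p) in enumerate(zip(true_ages, retention)):
        recalled = rng.random(n) < p                  # factor 1: retention
        dated = t + rng.normal(0.0, slope * t, n)     # factor 2: unbiased error, SD grows with delay
        dated = np.maximum(dated, 0.0)                # factor 3: no intrusions from the future
        rates[i] = np.sum(recalled & (dated <= window)) / n
    return rates
```

Because the dating error is symmetric but cannot spill into the future, old events leak forward into the queried window, producing the apparent overestimation of recent occurrences without any systematic dating bias.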
Abstract:
An aerodynamic sound source extraction from a general flow field is applied to a number of model problems and to a problem of engineering interest. The extraction technique is based on a variable decomposition, which results in an acoustic correction method, of each of the flow variables into a dominant flow component and a perturbation component. The dominant flow component is obtained with a general-purpose Computational Fluid Dynamics (CFD) code which uses a cell-centred finite volume method to solve the Reynolds-averaged Navier–Stokes equations. The perturbations are calculated from a set of acoustic perturbation equations with source terms extracted from unsteady CFD solutions at each time step via a staggered dispersion-relation-preserving (DRP) finite-difference scheme. Numerical experiments include (1) propagation of a 1-D acoustic pulse without mean flow, (2) propagation of a 2-D acoustic pulse with/without mean flow, (3) reflection of an acoustic pulse from a flat plate with mean flow, and (4) flow-induced noise generated by an unsteady laminar flow past a 2-D cavity. The computational results demonstrate the accuracy of the source extraction technique on the model problems and illustrate its feasibility for more complex aeroacoustic problems.
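As a loose illustration of numerical experiment (1), a 1-D pulse can be marched in time with a simple first-order upwind scheme. This is a deliberately simpler stand-in for the staggered DRP scheme used in the paper; the grid, pulse shape, and CFL number are all illustrative.

```python
import numpy as np

def advect_pulse(nx=200, c=1.0, cfl=0.5, steps=100):
    """March a Gaussian pulse with first-order upwind differencing
    on a periodic unit domain (a stand-in for the paper's DRP scheme)."""
    dx = 1.0 / nx
    dt = cfl * dx / c
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    p = np.exp(-((x - 0.25) / 0.05) ** 2)           # initial Gaussian pulse
    for _ in range(steps):
        p = p - c * dt / dx * (p - np.roll(p, 1))   # upwind difference, periodic BCs
    return x, p, steps * dt
```

The pulse peak travels a distance c*T; a first-order scheme also smears the pulse numerically, which is precisely why low-dispersion, low-dissipation DRP schemes are preferred for aeroacoustics.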
Abstract:
The aim of the current study was to evaluate the potential of the dynamic lipolysis model to simulate the absorption of a poorly soluble model drug compound, probucol, from three lipid-based formulations and to predict the in vitro-in vivo correlation (IVIVC) using neuro-fuzzy networks. An oil solution and two self-micro- and self-nano-emulsifying drug delivery systems (SMEDDS and SNEDDS) were tested in the lipolysis model. The release of probucol into the aqueous (micellar) phase was monitored during the progress of lipolysis. These release profiles were compared with plasma profiles obtained in a previous bioavailability study conducted in mini-pigs under the same conditions. The release rate and extent of release from the oil formulation were found to be significantly lower than from the SMEDDS and SNEDDS. The rank order of probucol released (SMEDDS approximately SNEDDS > oil formulation) was similar to the rank order of bioavailability from the in vivo study. The employed neuro-fuzzy model (AFM-IVIVC) achieved high prediction ability for different data formations (correlation greater than 0.91 and prediction error close to zero) without employing complex configurations. These preliminary results suggest that the dynamic lipolysis model combined with the AFM-IVIVC can be a useful tool in predicting the in vivo behavior of lipid-based formulations.
Abstract:
Estimating a time interval and temporally coordinating movements in space are fundamental skills, but the relationships between these different forms of timing, and the neural processes that they recruit, are not well understood. While different theories have been proposed to account for time perception, time estimation, and the temporal patterns of coordination, there are no general mechanisms that unify these various timing skills. This study considers whether a model of perceptuo-motor timing, the tau(GUIDE), can also describe how certain judgements of elapsed time are made. To evaluate this, an equation for determining interval estimates was derived from the tau(GUIDE) model and tested in a task where participants had to throw a ball and estimate when it would hit the floor. The results showed that, in accordance with the model, very accurate judgements could be made without vision (mean timing error -19.24 msec), and the model was a good predictor of the timing of skilled participants' estimates. It was concluded that since the tau(GUIDE) principle provides temporal information in a generic form, it could be a unitary process linking different forms of timing.
Abstract:
In a model commonly used in dynamic traffic assignment, the link travel time for a vehicle entering a link at time t is taken as a function of the number of vehicles on the link at time t. In an alternative, recently introduced model, the travel time for a vehicle entering a link at time t is taken as a function of an estimate of the flow in the immediate neighbourhood of the vehicle, averaged over the time the vehicle is traversing the link. Here we compare the solutions obtained from these two models when applied to various inflow profiles. We also divide the link into segments, apply each model sequentially to the segments, and again compare the results. As the number of segments is increased and the discretisation is refined towards the continuous limit, the solutions from the two models converge to the same solution, which is the solution of the Lighthill-Whitham-Richards (LWR) model for traffic flow. We illustrate the results for different travel time functions and patterns of inflows to the link. In the numerical examples, the solutions from the second of the two models are closer to the limit solutions. We also show that the models converge even when the link segments are not homogeneous, and we introduce a correction scheme in the second model to compensate for an approximation error, hence improving the approximation to the LWR model.
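The first model (travel time as a function of the number of vehicles on the link at entry time) can be sketched as a small event loop. The linear travel-time function tau = a + b*x and all parameter values below are illustrative assumptions, not taken from the paper.

```python
def whole_link_model(inflow, a, b, dt=1.0):
    """Travel time for traffic entering at step k is tau = a + b * x,
    where x is the number of vehicles on the link at entry time
    (linear form and parameters are illustrative).
    Returns the exit time for each entering platoon."""
    x = 0.0
    pending = []   # (exit_time, vehicles) for traffic already on the link
    exits = []
    for k, q in enumerate(inflow):
        t = k * dt
        # remove traffic whose travel time has elapsed by time t
        left = sum(v for te, v in pending if te <= t)
        pending = [(te, v) for te, v in pending if te > t]
        x -= left
        tau = a + b * x            # travel time seen by entering traffic
        pending.append((t + tau, q))
        x += q
        exits.append(t + tau)
    return exits
```

Applying this same function sequentially to shorter and shorter segments of the link is the discretisation whose limit the paper identifies with the LWR solution.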
Abstract:
In the IEEE 802.11 MAC layer protocol, there are different trade-off points between the number of nodes competing for the medium and the network capacity provided to them. There is also a trade-off between the wireless channel condition during the transmission period and the energy consumption of the nodes. Current approaches to modeling energy consumption in 802.11-based networks do not consider the influence of the channel condition on all types of frames (control and data) in the WLAN, nor do they consider the effect of the different MAC and PHY schemes that can occur in 802.11 networks. In this paper, we investigate energy consumption as a function of the number of competing nodes in IEEE 802.11's MAC and PHY layers under error-prone wireless channel conditions, and present a new energy consumption model. Analysis of the power consumed by each type of MAC and PHY over different bit error rates shows that the parameters in these layers play a critical role in determining the overall energy consumption of the ad hoc network. The goal of this research is not only to compare energy consumption using exact formulae in saturated IEEE 802.11-based DCF networks under varying numbers of competing nodes, but also, as the results show, to demonstrate that channel errors have a significant impact on energy consumption.
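The qualitative effect of channel errors on energy can be seen in a toy expected-retransmission calculation. This is not the paper's exact formulae, just the standard observation that per-frame energy scales with the inverse of the frame success probability (1 - BER)^L.

```python
def energy_per_success(e_frame, ber, bits):
    """Expected energy to deliver one frame over an error-prone channel.

    A transmission succeeds only if all `bits` arrive intact, i.e. with
    probability (1 - ber) ** bits; retransmitting until success makes the
    expected number of attempts 1 / p_success (toy model, not the paper's
    exact DCF analysis)."""
    p_success = (1.0 - ber) ** bits
    return e_frame / p_success
```

Even a bit error rate of 1e-4 inflates the per-frame energy of a 1000-bit frame by roughly 10%, which is the kind of channel-error impact the paper quantifies exactly for each MAC/PHY scheme.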
Abstract:
This study explores using artificial neural networks to predict the rheological and mechanical properties of underwater concrete (UWC) mixtures and to evaluate the sensitivity of such properties to variations in mixture ingredients. Artificial neural networks (ANN) mimic the structure and operation of biological neurons and have the unique ability of self-learning, mapping, and functional approximation. Details of the development of the proposed neural network model, its architecture, training, and validation are presented in this study. A database incorporating 175 UWC mixtures from nine different studies was developed to train and test the ANN model. The data are arranged in a patterned format. Each pattern contains an input vector that includes quantity values of the mixture variables influencing the behavior of UWC mixtures (that is, cement, silica fume, fly ash, slag, water, coarse and fine aggregates, and chemical admixtures) and a corresponding output vector that includes the rheological or mechanical property to be modeled. Results show that the ANN model thus developed is not only capable of accurately predicting the slump, slump-flow, washout resistance, and compressive strength of underwater concrete mixtures used in the training process, but it can also effectively predict the aforementioned properties for new mixtures designed within the practical range of the input parameters used in the training process with an absolute error of 4.6, 10.6, 10.6, and 4.4%, respectively.
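As a generic illustration of the ANN approach (not the paper's architecture, database, or hyperparameters), a tiny one-hidden-layer network trained by plain gradient descent can map an input vector of mixture variables to a property value.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.05, epochs=3000, seed=0):
    """One-hidden-layer network (tanh hidden units, linear output) fitted
    by batch gradient descent on squared error; all hyperparameters are
    illustrative. Returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        err = (H @ W2 + b2) - y[:, None]  # prediction error
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)  # backprop through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()
```

In the study, each input vector holds the mixture quantities (cement, silica fume, fly ash, and so on) and the output is the rheological or mechanical property being modeled; the sketch above only shows the fit-and-predict mechanics.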
Abstract:
A Newton–Raphson solution scheme with a stress point algorithm is presented for the implementation of an elastic–viscoplastic soil model in a finite element program. Viscoplastic strain rates are calculated using the stress and volumetric states of the soil. Sub-increments of time are defined for each iterative calculation of elastic–viscoplastic stress changes so that their sum adds up to the time increment for the load step. This carefully defined 'iterative time' ensures that the correct amount of viscoplastic straining is accumulated over the applied load step. The algorithms and assumptions required to implement the solution scheme are provided. Verification of the solution scheme is achieved by using it to analyze typical boundary value problems.
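The sub-incrementing idea can be sketched in a few lines: the load-step time increment is split into sub-increments whose sum equals the full increment, so the accumulated viscoplastic strain corresponds to the applied step. The explicit update and the strain-rate signature are illustrative; the paper computes rates from the stress and volumetric states inside a Newton-Raphson scheme.

```python
def accumulate_viscoplastic_strain(strain_rate, dt_total, n_sub):
    """Integrate viscoplastic strain over a load step using n_sub
    sub-increments that sum exactly to dt_total.

    strain_rate(t, eps) : viscoplastic strain rate (illustrative signature;
    in the paper it depends on the stress and volumetric states)."""
    eps = 0.0
    t = 0.0
    dt = dt_total / n_sub          # sub-increments sum to the step increment
    for _ in range(n_sub):
        eps += strain_rate(t, eps) * dt   # explicit update per sub-increment
        t += dt
    return eps
```

Because the sub-increments partition the step exactly, no viscoplastic straining is double-counted or dropped across the iterative corrections, which is the point of the 'iterative time' construction.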
Abstract:
The development of computer-based devices for music control has created a need to study how spectators understand new performance technologies and practices. As a part of a larger project examining how interactions with technology can be communicated to spectators, we present a model of a spectator's understanding of error by a performer. This model is broadly applicable throughout HCI, as interactions with technology are increasingly public and spectatorship is becoming more common.
Abstract:
This study evaluates the implementation of Menter's gamma-Re-theta Transition Model within the CFX12 solver for turbulent transition prediction on a natural laminar flow nacelle. Some challenges associated with this type of modeling have been identified. Computational fluid dynamics transitional flow simulation results are presented for a series of cruise cases with freestream Mach numbers ranging from 0.8 to 0.88, angles of attack from 2 to 0 degrees, and mass flow ratios from 0.60 to 0.75. These were validated against a series of wind-tunnel tests on the nacelle by comparing the predicted and experimental surface pressure distributions and transition locations. A selection of the validation cases is presented in this paper. In all cases, the computational fluid dynamics simulations agreed reasonably well with the experiments. The results indicate that Menter's gamma-Re-theta Transition Model is capable of predicting laminar boundary-layer transition to turbulence on a nacelle. Nonetheless, some limitations exist both in Menter's gamma-Re-theta Transition Model and in the implementation of the computational fluid dynamics model. The implementation of a more comprehensive experimental correlation in Menter's gamma-Re-theta Transition Model, preferably one derived from nacelle experiments and including the effects of compressibility and streamline curvature, is necessary for an accurate transitional flow simulation on a nacelle. In addition, improvements to the computational fluid dynamics model are also suggested, including consideration of varying distributed surface roughness and an appropriate empirical correction derived from nacelle experimental transition location data.
Abstract:
This brief investigates a possible application of the inverse Preisach model in combination with feedforward and feedback control strategies to control shape memory alloy actuators. In the feedforward control design, a fuzzy-based inverse Preisach model is used to compensate for the hysteresis nonlinearity effect. An extremum input history and fuzzy inference are utilized to replace the inverse classical Preisach model, reducing the number of experimental parameters and the computation time needed for the inversion. A proportional-integral-derivative (PID) controller is used as a feedback controller to regulate the error between the desired output and the system output. To demonstrate the effectiveness of the proposed controller, real-time control experiment results are presented.
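The feedback path described here is a standard discrete PID loop. The sketch below shows that error-regulation structure only; the gains, sample time, and the first-order plant in the usage test are illustrative placeholders, not the paper's SMA actuator model.

```python
class PID:
    """Minimal discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt              # accumulated error
        deriv = (err - self.prev_err) / self.dt     # error rate of change
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In the paper this feedback term is combined with the fuzzy inverse-Preisach feedforward signal; the PID alone already drives the tracking error of a simple stable plant to zero through its integral action.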
Abstract:
Shape memory alloy (SMA) actuators, which have the ability to return to a predetermined shape when heated, have many potential applications in aeronautics, surgical tools, robotics, and so on. Nonlinear hysteresis effects in SMA actuators present a problem in the motion control of these smart actuators. This paper investigates the control problem of SMA actuators in both simulation and experiment. In the simulation, a numerical Preisach model with a geometrical interpretation is used for hysteresis modeling of SMA actuators. This model is then incorporated in a closed-loop PID control strategy. The optimal values of the PID parameters are determined by using a genetic algorithm to minimize the mean squared error between the desired output displacement and the simulated output. However, when these parameters are applied to real SMA control, the control performance is not as good as in simulation, since the system is disturbed by unknown factors and changes in its surrounding environment. A further automated readjustment of the PID parameters using fuzzy logic is proposed to compensate for this limitation. To demonstrate the effectiveness of the proposed controller, real-time control experiment results are presented.