43 results for error correction model


Relevance: 40.00%

Abstract:

The World Health Organization estimates that 13 million children aged 5-15 years worldwide are visually impaired from uncorrected refractive error. School vision screening programs can identify and treat or refer children with refractive error. We concentrate on the findings of various screening studies and attempt to identify key factors in the success and sustainability of such programs in the developing world. We reviewed original and review articles describing children's vision and refractive error screening programs published in English and listed in PubMed, Medline OVID, Google Scholar, and Oxford University Electronic Resources databases. Data were abstracted on study objective, design, setting, participants, and outcomes, including accuracy of screening, quality of refractive services, barriers to uptake, impact on quality of life, and cost-effectiveness of programs. Inadequately corrected refractive error is an important global cause of visual impairment in childhood. School-based vision screening carried out by teachers and other ancillary personnel may be an effective means of detecting affected children and improving their visual function with spectacles. The need for services and potential impact of school-based programs varies widely between areas, depending on prevalence of refractive error and competing conditions and rates of school attendance. Barriers to acceptance of services include the cost and quality of available refractive care and mistaken beliefs that glasses will harm children's eyes. Further research is needed in areas such as the cost-effectiveness of different screening approaches and impact of education to promote acceptance of spectacle-wear. School vision programs should be integrated into comprehensive efforts to promote healthy children and their families.

Relevance: 30.00%

Abstract:

Estimating a time interval and temporally coordinating movements in space are fundamental skills, but the relationships between these different forms of timing, and the neural processes that they incur, are not well understood. While different theories have been proposed to account for time perception, time estimation, and the temporal patterns of coordination, there are no general mechanisms which unify these various timing skills. This study considers whether a model of perceptuo-motor timing, the tau(GUIDE), can also describe how certain judgements of elapsed time are made. To evaluate this, an equation for determining interval estimates was derived from the tau(GUIDE) model and tested in a task where participants had to throw a ball and estimate when it would hit the floor. The results showed that in accordance with the model, very accurate judgements could be made without vision (mean timing error -19.24 msec), and the model was a good predictor of skilled participants' estimate timing. It was concluded that since the tau(GUIDE) principle provides temporal information in a generic form, it could be a unitary process that links different forms of timing.
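The first-order tau variable underlying models such as the tau(GUIDE) is simply a gap divided by its rate of closure. A toy ball-drop calculation illustrates the idea; the constants and the comparison below are illustrative assumptions, not the paper's model:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def flight_time(h0, v0):
    """Exact time for a ball at height h0 (m), moving downward at
    v0 (m/s), to reach the floor: solve h0 - v0*t - 0.5*G*t**2 = 0."""
    return (-v0 + math.sqrt(v0 ** 2 + 2.0 * G * h0)) / G

def tau_estimate(gap, closure_rate):
    """First-order tau: remaining gap divided by its closure rate.
    Accurate when the closure rate dominates; ignores acceleration."""
    return gap / closure_rate
```

For an accelerating ball the first-order estimate overestimates the true flight time, which is one motivation for higher-order tau-based quantities such as the tau(GUIDE).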

Relevance: 30.00%

Abstract:

In a model commonly used in dynamic traffic assignment the link travel time for a vehicle entering a link at time t is taken as a function of the number of vehicles on the link at time t. In an alternative recently introduced model, the travel time for a vehicle entering a link at time t is taken as a function of an estimate of the flow in the immediate neighbourhood of the vehicle, averaged over the time the vehicle is traversing the link. Here we compare the solutions obtained from these two models when applied to various inflow profiles. We also divide the link into segments, apply each model sequentially to the segments and again compare the results. As the number of segments is increased, the discretisation refined to the continuous limit, the solutions from the two models converge to the same solution, which is the solution of the Lighthill, Whitham, Richards (LWR) model for traffic flow. We illustrate the results for different travel time functions and patterns of inflows to the link. In the numerical examples the solutions from the second of the two models are closer to the limit solutions. We also show that the models converge even when the link segments are not homogeneous, and introduce a correction scheme in the second model to compensate for an approximation error, hence improving the approximation to the LWR model.
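The first (occupancy-based) model can be sketched in a few lines: a vehicle entering at time t pays a travel time that is a function of the number of vehicles currently on the link. The linear travel-time function and its parameters here are illustrative assumptions, not values from the paper:

```python
# Toy occupancy-based link travel-time model (hedged sketch):
# a vehicle entering at time t experiences tau(t) = T_FREE + B * n(t),
# where n(t) is the number of vehicles still on the link at time t.
T_FREE = 60.0  # free-flow travel time (s), assumed
B = 2.0        # marginal delay per vehicle on the link (s), assumed

def simulate(entry_times):
    """Return the exit time of each vehicle, given sorted entry times."""
    exits = []
    for t in entry_times:
        n = sum(1 for e in exits if e > t)  # earlier vehicles still on link
        exits.append(t + T_FREE + B * n)
    return exits
```

The flow-averaged model of the second kind, and the segment-by-segment refinement toward the LWR limit, would wrap this per-segment logic in an outer loop over segments.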

Relevance: 30.00%

Abstract:

In the IEEE 802.11 MAC layer protocol, there are different trade-off points between the number of nodes competing for the medium and the network capacity provided to them. There is also a trade-off between the wireless channel condition during the transmission period and the energy consumption of the nodes. Current approaches at modeling energy consumption in 802.11 based networks do not consider the influence of the channel condition on all types of frames (control and data) in the WLAN. Nor do they consider the effect on the different MAC and PHY schemes that can occur in 802.11 networks. In this paper, we investigate energy consumption corresponding to the number of competing nodes in IEEE 802.11's MAC and PHY layers in error-prone wireless channel conditions, and present a new energy consumption model. Analysis of the power consumed by each type of MAC and PHY over different bit error rates shows that the parameters in these layers play a critical role in determining the overall energy consumption of the ad-hoc network. The goal of this research is not only to compare the energy consumption using exact formulae in saturated IEEE 802.11-based DCF networks under varying numbers of competing nodes, but also, as the results show, to demonstrate that channel errors have a significant impact on the energy consumption.
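The effect of channel errors on energy consumption can be illustrated with a minimal per-frame calculation; the independent-bit-error assumption and the retransmit-until-success policy are simplifications of the paper's full MAC/PHY model:

```python
def frame_success_prob(ber, n_bits):
    """Probability a frame of n_bits arrives with no bit errors,
    assuming independent bit errors (a simplifying assumption)."""
    return (1.0 - ber) ** n_bits

def energy_per_delivered_frame(ber, n_bits, e_tx):
    """Expected transmit energy to deliver one frame when each attempt
    costs e_tx and failed frames are retransmitted until success."""
    return e_tx / frame_success_prob(ber, n_bits)
```

Even a bit error rate of 1e-4 roughly doubles the expected energy for a 1000-byte frame, which is consistent with the paper's point that channel errors significantly affect energy consumption.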

Relevance: 30.00%

Abstract:

This study explores using artificial neural networks to predict the rheological and mechanical properties of underwater concrete (UWC) mixtures and to evaluate the sensitivity of such properties to variations in mixture ingredients. Artificial neural networks (ANN) mimic the structure and operation of biological neurons and have the unique ability of self-learning, mapping, and functional approximation. Details of the development of the proposed neural network model, its architecture, training, and validation are presented in this study. A database incorporating 175 UWC mixtures from nine different studies was developed to train and test the ANN model. The data are arranged in a patterned format. Each pattern contains an input vector that includes quantity values of the mixture variables influencing the behavior of UWC mixtures (that is, cement, silica fume, fly ash, slag, water, coarse and fine aggregates, and chemical admixtures) and a corresponding output vector that includes the rheological or mechanical property to be modeled. Results show that the ANN model thus developed is not only capable of accurately predicting the slump, slump-flow, washout resistance, and compressive strength of underwater concrete mixtures used in the training process, but it can also effectively predict the aforementioned properties for new mixtures designed within the practical range of the input parameters used in the training process with an absolute error of 4.6, 10.6, 10.6, and 4.4%, respectively.
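The patterned data arrangement described above amounts to mapping each mixture record to a fixed-order input vector and a one-element output vector; the field names below are illustrative stand-ins for the paper's mixture variables:

```python
# Fixed ingredient order for the input vector (assumed names).
MIX_FIELDS = ["cement", "silica_fume", "fly_ash", "slag", "water",
              "coarse_agg", "fine_agg", "admixture"]

def to_pattern(mixture, target):
    """Arrange one UWC mixture record as an (input, output) training
    pattern: ingredient quantities in a fixed order, then the property
    to be modelled (e.g. slump or compressive strength)."""
    return [mixture[f] for f in MIX_FIELDS], [target]

mix = {"cement": 400, "silica_fume": 40, "fly_ash": 0, "slag": 0,
       "water": 180, "coarse_agg": 950, "fine_agg": 780, "admixture": 6}
x, y = to_pattern(mix, 45.0)  # hypothetical 45.0 MPa compressive strength
```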

Relevance: 30.00%

Abstract:

A Newton–Raphson solution scheme with a stress point algorithm is presented for the implementation of an elastic–viscoplastic soil model in a finite element program. Viscoplastic strain rates are calculated using the stress and volumetric states of the soil. Sub-increments of time are defined for each iterative calculation of elastic–viscoplastic stress changes so that their sum adds up to the time increment for the load step. This carefully defined ‘iterative time’ ensures that the correct amount of viscoplastic straining is accumulated over the applied load step. The algorithms and assumptions required to implement the solution scheme are provided. Verification of the solution scheme is achieved by using it to analyze typical boundary value problems.
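The sub-incrementation idea — sub-increment durations summing exactly to the load-step time increment so that the right amount of viscoplastic strain accrues — can be sketched with a scalar Perzyna-type overstress law; the law and all parameter values are illustrative assumptions, not the paper's constitutive model:

```python
ETA = 1.0e3      # viscosity parameter (assumed)
SIGMA_Y = 100.0  # yield stress (assumed)
E_MOD = 1.0e4    # modulus coupling strain back to stress (assumed)

def viscoplastic_strain_rate(sigma):
    """Perzyna-type overstress rate: <sigma - sigma_y> / eta,
    zero below yield."""
    return max(sigma - SIGMA_Y, 0.0) / ETA

def accumulate(sigma0, dt_step, n_sub):
    """Integrate viscoplastic strain over one load-step time increment
    using n_sub sub-increments whose durations sum exactly to dt_step."""
    dt_sub = dt_step / n_sub
    eps_vp, sigma = 0.0, sigma0
    for _ in range(n_sub):
        eps_vp += viscoplastic_strain_rate(sigma) * dt_sub
        sigma = sigma0 - E_MOD * eps_vp  # stress relaxes as strain accrues
    return eps_vp
```

With a single sub-increment the accumulated strain is badly overestimated; refining the sub-incrementation converges to the relaxation solution, which is the point of the carefully defined 'iterative time'.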

Relevance: 30.00%

Abstract:

The development of computer-based devices for music control has created a need to study how spectators understand new performance technologies and practices. As a part of a larger project examining how interactions with technology can be communicated to spectators, we present a model of a spectator's understanding of error by a performer. This model is broadly applicable throughout HCI, as interactions with technology are increasingly public and spectatorship is becoming more common.

Relevance: 30.00%

Abstract:

This study evaluates the implementation of Menter's gamma-Re-theta Transition Model within the CFX12 solver for turbulent transition prediction on a natural laminar flow nacelle. Some challenges associated with this type of modeling have been identified. The computational fluid dynamics transitional flow simulation results are presented for a series of cruise cases with freestream Mach numbers ranging from 0.8 to 0.88, angles of attack from 2 to 0 degrees, and mass flow ratios from 0.60 to 0.75. These were validated with a series of wind-tunnel tests on the nacelle by comparing the predicted and experimental surface pressure distributions and transition locations. A selection of the validation cases is presented in this paper. In all cases, computational fluid dynamics simulations agreed reasonably well with the experiments. The results indicate that Menter's gamma-Re-theta Transition Model is capable of predicting laminar boundary-layer transition to turbulence on a nacelle. Nonetheless, some limitations exist in both Menter's gamma-Re-theta Transition Model and in the implementation of the computational fluid dynamics model. The implementation of a more comprehensive experimental correlation in Menter's gamma-Re-theta Transition Model, preferably one from nacelle experiments including the effects of compressibility and streamline curvature, is necessary for an accurate transitional flow simulation on a nacelle. Improvements to the computational fluid dynamics model are also suggested, including the consideration of varying distributed surface roughness and an appropriate empirical correction derived from nacelle experimental transition location data.

Relevance: 30.00%

Abstract:

This brief investigates a possible application of the inverse Preisach model in combination with feedforward and feedback control strategies to control shape memory alloy actuators. In the feedforward control design, a fuzzy-based inverse Preisach model is used to compensate for the hysteresis nonlinearity effect. An extrema input history and a fuzzy inference system are utilized to replace the inverse classical Preisach model. This allows for a reduction in the number of experimental parameters and in the computation time for the inversion of the classical Preisach model. A proportional-integral-derivative (PID) controller is used as a feedback controller to regulate the error between the desired output and the system output. To demonstrate the effectiveness of the proposed controller, real-time control experiment results are presented.
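The feedback half of the scheme is a standard discrete PID loop. The sketch below regulates a first-order plant that stands in for the hysteresis-compensated SMA actuator; the plant and all gains are illustrative assumptions:

```python
class PID:
    """Discrete PID controller with rectangular integration."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def run(setpoint=1.0, steps=2000, dt=0.01):
    """Drive a toy first-order plant dx/dt = -x + u to the setpoint."""
    pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=dt)
    x = 0.0
    for _ in range(steps):
        u = pid.update(setpoint - x)
        x += (-x + u) * dt  # explicit Euler step of the plant
    return x
```

The integral term removes the steady-state error that a proportional-only controller would leave; in the paper this loop acts on the residual error left after the fuzzy inverse-Preisach feedforward compensation.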

Relevance: 30.00%

Abstract:

Shape memory alloy (SMA) actuators, which have the ability to return to a predetermined shape when heated, have many potential applications in aeronautics, surgical tools, robotics and so on. Nonlinear hysteresis effects in SMA actuators present a problem in the motion control of these smart actuators. This paper investigates the control problem of SMA actuators in both simulation and experiment. In the simulation, the numerical Preisach model with geometrical interpretation is used for hysteresis modeling of SMA actuators. This model is then incorporated in a closed-loop PID control strategy. The optimal values of the PID parameters are determined by using a genetic algorithm to minimize the mean squared error between the desired output displacement and the simulated output. However, when these parameters are applied to the real SMA control, the performance is not as good as in simulation, since the system is disturbed by unknown factors and changes in its surrounding environment. A further automated readjustment of the PID parameters using fuzzy logic is proposed to compensate for this limitation. To demonstrate the effectiveness of the proposed controller, real-time control experiment results are presented.
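The gain-tuning step can be sketched as an evolutionary search that minimizes the mean squared error on a toy plant. This single-parent random-mutation hill climb is a deliberate simplification of a full genetic algorithm (no population or crossover), and the first-order plant stands in for the Preisach-modelled SMA actuator:

```python
import random

def mse_of_gains(kp, ki, setpoint=1.0, steps=300, dt=0.01):
    """Mean squared tracking error of a PI controller on the toy
    plant dx/dt = -x + u (illustrative stand-in for the SMA model)."""
    x, integral, err_sq = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - x
        integral += e * dt
        u = kp * e + ki * integral
        x += (-x + u) * dt
        err_sq += e * e
    return err_sq / steps

def evolve(generations=40, seed=0):
    """Mutate the gains, keeping a candidate only if it lowers the MSE."""
    rng = random.Random(seed)
    best = (1.0, 0.1)
    best_cost = mse_of_gains(*best)
    for _ in range(generations):
        cand = (max(0.0, best[0] + rng.gauss(0.0, 0.5)),
                max(0.0, best[1] + rng.gauss(0.0, 0.2)))
        cost = mse_of_gains(*cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

The paper's fuzzy-logic readjustment then compensates online for the mismatch between this simulation-tuned optimum and the real, disturbed plant.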

Relevance: 30.00%

Abstract:

GC-MS data on veterinary drug residues in bovine urine are used for controlling the illegal practice of fattening cattle. According to current detection criteria, peak patterns of preferably four ions should agree within 10 or 20% of a corresponding standard pattern. These criteria are rigid, rather arbitrary and do not match daily practice. A new model, based on multivariate modeling of log peak abundance ratios, provides a theoretical basis for the identification of analytes and optimizes the balance between the avoidance of false positives and false negatives. The performance of the model is demonstrated on data provided by five laboratories, each supplying GC-MS measurements on the detection of clenbuterol, dienestrol and 19 beta-nortestosterone in urine. The proposed model shows a better performance than confirmation by using the current criteria and provides a statistical basis for inspection criteria in terms of error probabilities.
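The shift from fixed percentage windows to log-ratio space can be illustrated minimally. The per-ratio tolerance check below is a hedged stand-in for the paper's multivariate model; the real method replaces the fixed tolerance with a statistical criterion carrying explicit error probabilities:

```python
import math

def log_ratio_vector(abundances):
    """Log ratios of each ion abundance to the first (base) ion;
    identification criteria are then applied in log-ratio space."""
    base = abundances[0]
    return [math.log(a / base) for a in abundances[1:]]

def within_tolerance(sample, standard, tol=0.2):
    """Accept the sample if every log peak abundance ratio lies within
    tol of the standard's (tol = 0.2 is an illustrative value)."""
    s, r = log_ratio_vector(sample), log_ratio_vector(standard)
    return all(abs(a - b) <= tol for a, b in zip(s, r))
```

A key property of log ratios is symmetry: a doubling and a halving of an abundance are equally far from the standard, which a plain percentage window does not guarantee.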

Relevance: 30.00%

Abstract:

This paper proposes a two-level 3D human pose tracking method for a specific action captured by several cameras. The generation of pose estimates relies on fitting a 3D articulated model on a Visual Hull generated from the input images. First, an initial pose estimate is constrained by a low dimensional manifold learnt by Temporal Laplacian Eigenmaps. Then, an improved global pose is calculated by refining individual limb poses. The validation of our method uses a public standard dataset and demonstrates its accuracy and computational efficiency. © 2011 IEEE.

Relevance: 30.00%

Abstract:

The development of accurate structural/thermal numerical models of complex systems, such as aircraft fuselage barrels, is often limited and determined by the smallest scales that need to be modelled. Developing reduced-order models of the smallest scales and integrating them with higher-level models can minimise this bottleneck while retaining efficient, robust and accurate numerical models. This paper demonstrates a methodology for developing compact thermal fluid models (CTFMs) for compartments where mixed convection regimes are present. Detailed numerical simulations (CFD) have been developed for an aircraft crown compartment and validated against experimental data obtained from a 1:1 scale compartment rig. The crown compartment is defined as the confined area between the upper fuselage and the passenger cabin in a single-aisle commercial aircraft. CFD results were utilised to extract average quantities (temperature and heat fluxes) and characteristic parameters (heat transfer coefficients) to generate CTFMs. The CTFMs have then been compared with the results obtained from the detailed models, showing average errors for temperature predictions lower than 5%. This error can be deemed acceptable when compared to the nominal experimental error associated with the thermocouple measurements.

The CTFM methodology developed allows accurate reduced-order models to be generated, with accuracy restricted to the range of boundary conditions applied. This limitation arises from the sensitivity of the internal flow structures to the applied boundary condition set. CTFMs thus generated can then be integrated into complex numerical models of whole fuselage sections.

A further step in developing an exhaustive methodology would be to implement a rule-based approach that extracts the number and positions of the CTFM nodes directly from the CFD simulations.
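A CTFM node of the kind described — an average temperature driven by a CFD-extracted heat transfer coefficient — can be sketched as a single lumped capacitance; all parameter values below are illustrative assumptions, not data from the crown-compartment rig:

```python
def steady_temperature(t_env, q_in, h, area):
    """Steady state of the lumped node ODE
    C * dT/dt = h * area * (t_env - T) + q_in."""
    return t_env + q_in / (h * area)

def euler_step(t, t_env, q_in, h, area, c_th, dt):
    """One explicit Euler step of the lumped node ODE."""
    return t + dt * (h * area * (t_env - t) + q_in) / c_th

# March a node toward its steady state (assumed parameter values).
t = 20.0
for _ in range(20000):
    t = euler_step(t, t_env=20.0, q_in=100.0, h=10.0, area=1.0,
                   c_th=500.0, dt=0.1)
```

A full CTFM would couple several such nodes, one per region of the compartment, with the heat transfer coefficients and inter-node exchanges extracted from the detailed CFD results.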

Relevance: 30.00%

Abstract:

In this article the multibody simulation software package MADYMO for analysing and optimizing occupant safety design was used to model crash tests for Normal Containment barriers in accordance with EN 1317. The verification process was carried out by simulating a TB31 and a TB32 crash test performed on vertical portable concrete barriers and by comparing the numerical results to those obtained experimentally. The same modelling approach was applied to both tests to evaluate the predictive capacity of the modelling at two different impact speeds. A sensitivity analysis of the vehicle stiffness was also carried out. The capacity to predict all of the principal EN1317 criteria was assessed for the first time: the acceleration severity index, the theoretical head impact velocity, the barrier working width and the vehicle exit box. Results showed a maximum error of 6% for the acceleration severity index and 21% for theoretical head impact velocity for the numerical simulation in comparison to the recorded data. The exit box position was predicted with a maximum error of 4°. For the working width, a large percentage difference was observed for test TB31 due to the small absolute value of the barrier deflection but the results were well within the limit value from the standard for both tests. The sensitivity analysis showed the robustness of the modelling with respect to contact stiffness increase of ±20% and ±40%. This is the first multibody model of portable concrete barriers that can reproduce not only the acceleration severity index but all the test criteria of EN 1317 and is therefore a valuable tool for new product development and for injury biomechanics research.
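The acceleration severity index reported above is computed from 50 ms moving-average vehicle accelerations normalized by the EN 1317 limit values (taken here as 12 g, 9 g and 10 g for the longitudinal, lateral and vertical axes); the acceleration trace in the usage example is a toy illustration:

```python
import math

AX_LIM, AY_LIM, AZ_LIM = 12.0, 9.0, 10.0  # EN 1317 limit accelerations (g)

def asi(ax, ay, az):
    """ASI at one instant, from 50 ms moving-average accelerations in g."""
    return math.sqrt((ax / AX_LIM) ** 2
                     + (ay / AY_LIM) ** 2
                     + (az / AZ_LIM) ** 2)

def max_asi(ax_s, ay_s, az_s):
    """The reported ASI is the maximum over the impact time history."""
    return max(asi(x, y, z) for x, y, z in zip(ax_s, ay_s, az_s))
```

Reaching any one directional limit alone gives ASI = 1, the boundary of the strictest EN 1317 impact severity class.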

Relevance: 30.00%

Abstract:

This paper investigates the construction of linear-in-the-parameters (LITP) models for multi-output regression problems. Most existing stepwise forward algorithms choose the regressor terms one by one, each time maximizing the model error reduction ratio. The drawback is that such procedures cannot guarantee a sparse model, especially under highly noisy learning conditions. The main objective of this paper is to improve the sparsity and generalization capability of a model for multi-output regression problems, while reducing the computational complexity. This is achieved by proposing a novel multi-output two-stage locally regularized model construction (MTLRMC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters in each term, such as the width of the Gaussian function and the power of a polynomial term, are firstly determined by the ELM. An initial multi-output LITP model is then generated according to the termination criteria in the first stage. The significance of each selected regressor is checked and the insignificant ones are replaced at the second stage. The proposed method can produce an optimized compact model by using the regularized parameters. Further, to reduce the computational complexity, a proper regression context is used to allow fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique. © 2013 Elsevier B.V.
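The ELM step — a random nonlinear hidden layer whose output weights are solved linearly for all outputs at once — can be sketched as follows. The data and network size are synthetic stand-ins, and the paper's two-stage locally regularized term selection is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50):
    """Extreme learning machine: random input weights and biases,
    sigmoid hidden layer, output weights by least squares (multi-output)."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer design matrix
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Synthetic two-output regression problem (illustrative only).
X = rng.uniform(-1.0, 1.0, size=(200, 2))
Y = np.column_stack([np.sin(X[:, 0]), X[:, 1] ** 2])
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta)
```

Because the hidden-layer parameters are fixed at random, the model is linear in beta, which is what lets the second stage of the paper's method check and replace individual regressor terms cheaply.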