410 results for adaptive operator selection
Abstract:
Changes at work are often accompanied by the threat of, or actual, resource loss. Through an experiment, we investigated the detrimental effect of the threat of resource loss on adaptive task performance. Self-regulation (i.e., task focus and emotion control) was hypothesized to buffer the negative relationship between the threat of resource loss and adaptive task performance. Adaptation was conceptualized as relearning after a change in task execution rules. Threat of resource loss was manipulated for 100 participants undertaking an air traffic control task. Using discontinuous growth curve modeling, two kinds of adaptation were differentiated: transition adaptation and reacquisition adaptation. The results showed that individuals who experienced the threat of resource loss had a steeper drop in performance (less transition adaptation) and a subsequently slower recovery (less reacquisition adaptation) compared with the control group, which experienced no threat. Emotion control (but not task focus) moderated the relationship between the threat of resource loss and transition adaptation: individuals who felt threatened but regulated their emotions performed better immediately after the task change (but not later on) than threatened individuals who regulated their emotions less well. Later on, however, relearning (reacquisition adaptation) under the threat of resource loss was facilitated when individuals concentrated on the task at hand.
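Discontinuous growth curve models of this kind separate the two forms of adaptation by coding time with three predictors: an overall practice trend, an intercept shift at the task change (transition adaptation), and a post-change slope (reacquisition adaptation). A minimal illustrative sketch of that coding, with trial numbers and a change point chosen for illustration only:

```python
def dgm_codes(n_trials, change_trial):
    # Build (trial, TIME, TRANS, RECOV) rows for a discontinuous growth model:
    # TIME  - overall practice trend across all trials
    # TRANS - dummy that turns on at the change (intercept drop = transition adaptation)
    # RECOV - trials elapsed since the change (relearning slope = reacquisition adaptation)
    rows = []
    for t in range(1, n_trials + 1):
        time = t - 1
        trans = 1 if t >= change_trial else 0
        recov = max(0, t - change_trial)
        rows.append((t, time, trans, recov))
    return rows

# 10 trials with the rule change at trial 6 (illustrative values)
coding = dgm_codes(10, 6)
```

Regressing performance on these three predictors then yields the transition drop as the TRANS coefficient and the recovery rate as the RECOV coefficient.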
Abstract:
This report describes an artificial neural network (ANN) based automated emergency landing site selection system for unmanned aerial vehicles (UAVs) and general aviation (GA). The system aims to increase the safety of UAV operation by emulating pilot decision making in emergency landing scenarios, using an ANN to select a safe landing site from the available candidates. The ability of an ANN to model complex input relationships makes it well suited to the multicriteria decision making (MCDM) process of emergency landing site selection. The ANN operates by identifying the more favorable of two landing sites when provided with an input vector derived from both landing sites' parameters, the aircraft's current state, and wind measurements. The system consists of a feed-forward ANN, a pre-processor class that produces ANN input vectors, and a class in charge of creating a ranking of landing site candidates using the ANN. The system was successfully implemented in C++ using the FANN C++ library and ROS. Results obtained from ANN training and from simulations using landing sites randomly generated by a site detection simulator verify the feasibility of an ANN based automated emergency landing site selection system.
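The ranking class described above can be built on top of any pairwise comparator: the net answers "which of these two sites is better?", and a comparison sort turns those answers into a full ranking. A minimal sketch of that idea, with a simple weighted score standing in for the trained ANN and entirely hypothetical site features (size, slope, distance):

```python
from functools import cmp_to_key

def compare_sites(a, b):
    # Stand-in for the trained ANN: the real system feeds both sites'
    # parameters plus aircraft state and wind into the network; here a
    # hand-picked weighted score plays that role for illustration.
    def score(s):
        return 2.0 * s["size"] - 1.5 * s["slope"] - 0.5 * s["distance_km"]
    return -1 if score(a) > score(b) else 1

sites = [
    {"name": "field_A", "size": 3.0, "slope": 0.2, "distance_km": 4.0},
    {"name": "road_B",  "size": 1.0, "slope": 0.0, "distance_km": 1.0},
    {"name": "field_C", "size": 2.5, "slope": 1.5, "distance_km": 2.0},
]

# Sort candidates using only pairwise comparisons, best site first
ranking = sorted(sites, key=cmp_to_key(compare_sites))
```

Using a pairwise comparator rather than an absolute "safety score" means the net only ever has to make relative judgments, which is typically an easier learning problem.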
Abstract:
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these raise a clear need to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of the research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, an integrated combination of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and has been successfully demonstrated on many different problem domains. The approach requires only small training sets (it was tested with 5 to 10 images per experiment) and is fast, scalable, and robust. Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proofs of concept that integrate the motion and action sides. First, reactive reaching and grasping is shown: the robot avoids obstacles detected in the visual stream while reaching for the intended target object. This integration also enables the robot to operate in non-static environments, i.e., the reach is adapted on-the-fly from the visual feedback received, e.g., when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
Abstract:
A gyrostabiliser control system and method for stabilising marine vessel motion based on precession information only. The control system employs an Automatic Gain Control (AGC) precession controller (60). The system operates with a gain factor that is continually and gradually minimised so as to let the gyro flywheel (12) develop as much precession as possible: the higher the precession, the higher the roll stabilising moment. This continuous gain change provides adaptation to changes in sea state and sailing conditions. The system effectively predicts the likelihood of maximum precession being reached. Should this event be detected, the gain is rapidly increased so as to provide a braking precession torque. Once the event has passed, the system again attempts to gradually decrease the gain.
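The AGC logic described above amounts to an asymmetric gain schedule: decay the gain slowly while precession is safe, boost it sharply when precession nears its limit. A minimal sketch of one update step, with all constants (decay rate, boost factor, threshold, bounds) chosen for illustration rather than taken from the patent:

```python
def agc_gain_step(gain, precession, max_precession,
                  decay=0.999, boost=1.5, threshold=0.8):
    # Gradually lower the gain so the flywheel develops as much precession
    # as possible; if precession approaches its limit, raise the gain
    # rapidly so the controller applies a braking precession torque.
    if abs(precession) >= threshold * max_precession:
        return min(gain * boost, 1.0)   # event predicted: boost gain fast
    return max(gain * decay, 0.05)      # normal operation: slow decay
```

Run once per control cycle, this keeps the gain drifting toward its lower bound in calm conditions and snapping upward in heavy seas, which is the adaptation-to-sea-state behaviour the abstract describes.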
Abstract:
Travel speed is one of the most critical parameters for road safety; the evidence suggests that increased vehicle speed is associated with higher crash risk and injury severity. Both naturalistic and simulator studies have reported that drivers distracted by a mobile phone select a lower driving speed. Speed decrements have been argued to be a risk-compensatory behaviour of distracted drivers. Nonetheless, the extent and circumstances of the speed change among distracted drivers remain poorly understood. As such, the primary objective of this study was to investigate patterns of speed variation in relation to contextual factors and distraction. Using the CARRS-Q high-fidelity Advanced Driving Simulator, the speed selection behaviour of 32 drivers aged 18-26 years was examined in two phone conditions: baseline (no phone conversation) and handheld phone operation. The simulator driving route contained five types of road traffic complexity: a road section with a horizontal S curve, a horizontal S curve with adjacent traffic, a straight segment of suburban road without traffic, a straight segment of suburban road with traffic interactions, and a road segment in a city environment. Speed deviations from the posted speed limit were analysed using Ward's hierarchical clustering method to identify the effects of road traffic environment and cognitive distraction. The speed deviations along curved road sections formed two different clusters for the two phone conditions, implying that distracted drivers adopt a different strategy for selecting driving speed in a complex driving situation. In particular, distracted drivers selected a lower speed while driving along a horizontal curve. The speed deviations along the city road segment and the other straight road segments grouped into a different cluster, and the deviations were not significantly different across phone conditions, suggesting a negligible effect of distraction on speed selection along these road sections. Future research should focus on developing a risk compensation model to explain the relationship between road traffic complexity and distraction.
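Ward's method merges, at each step, the two clusters whose union gives the smallest increase in total within-cluster sum of squares. A self-contained sketch of that criterion on one-dimensional data, with made-up speed-deviation values (km/h below the limit) standing in for the study's measurements:

```python
def ward_cluster_1d(values, k):
    # Greedy agglomerative clustering with Ward's minimum-variance
    # criterion: repeatedly merge the pair of clusters whose merge
    # increases the total within-cluster SSE the least.
    clusters = [[v] for v in values]

    def sse(c):
        m = sum(c) / len(c)
        return sum((x - m) ** 2 for x in c)

    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                cost = (sse(clusters[i] + clusters[j])
                        - sse(clusters[i]) - sse(clusters[j]))
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Hypothetical deviations: baseline drivers near -8, distracted near -15
groups = ward_cluster_1d([-8, -15, -7, -14, -9, -16], 2)
```

With clearly separated conditions, the two recovered clusters line up with the two phone conditions, which is the pattern the study reports for the curved road sections.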
Abstract:
Spatial data analysis has become increasingly important in ecology and economics over the last decade. One focus of spatial data analysis is how to select predictors, variance functions, and correlation functions. In general, however, the true covariance function is unknown and the working covariance structure is often misspecified. In this paper, our target is to find a good strategy for identifying the best model from a candidate set using model selection criteria. We evaluate the ability of several information criteria (the corrected Akaike information criterion, the Bayesian information criterion (BIC), and the residual information criterion (RIC)) to choose the optimal model when the working correlation function, the working variance function, and the working mean function are correct or misspecified. Simulations are carried out for small to moderate sample sizes. Four candidate covariance functions (exponential, Gaussian, Matern, and rational quadratic) are used in the simulation studies. Summarizing the simulation results, we find that a misspecified working correlation structure can still capture some spatial correlation information in model fitting. When the sample size is large enough, BIC and RIC perform well even if the working covariance is misspecified. Moreover, the performance of these information criteria is related to the average level of model fit, as indicated by the average adjusted R-square, and overall RIC performs well.
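The core mechanic behind criteria such as BIC is a penalized fit comparison: residual sum of squares traded off against parameter count. A minimal sketch on a toy mean-function choice (intercept-only vs. simple linear), which illustrates the criterion itself; the paper's setting additionally compares covariance functions, which this sketch does not attempt:

```python
import math

def bic(rss, n, k):
    # Gaussian BIC up to an additive constant: n*log(RSS/n) + k*log(n)
    return n * math.log(rss / n) + k * math.log(n)

def fit_models(x, y):
    # Fit intercept-only and simple-linear mean models by OLS,
    # return the BIC of each (lower is better).
    n = len(x)
    ybar = sum(y) / n
    rss0 = sum((yi - ybar) ** 2 for yi in y)            # intercept-only
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    rss1 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return bic(rss0, n, 1), bic(rss1, n, 2)
```

On data with a genuine linear trend, the linear model's lower RSS outweighs its extra-parameter penalty, so BIC selects it.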
Abstract:
Selection criteria and misspecification tests for the intra-cluster correlation structure (ICS) in longitudinal data analysis are considered. In particular, the asymptotic distribution of the correlation information criterion (CIC) is derived, and a new method for selecting a working ICS is proposed by standardizing the selection criterion as a p-value. The CIC test is found to be powerful in detecting misspecification of the working ICS, and with respect to working ICS selection, the standardized CIC test also shows satisfactory performance. Simulation studies and applications to two real longitudinal datasets illustrate how these criteria and tests can be used.
Abstract:
A flexible and simple Bayesian decision-theoretic design for dose-finding trials is proposed in this paper. To reduce the computational burden, we adopt a working model with conjugate priors, which is flexible enough to fit all monotonic dose-toxicity curves and produces analytic posterior distributions. We also discuss how to use a proper utility function to reflect the interest of the trial. Patients are allocated based not only on the utility function but also on the chosen dose selection rule. The most popular dose selection rule is the one-step-look-ahead (OSLA), which selects the best-so-far dose. A more complicated rule, such as the two-step-look-ahead, is theoretically more efficient than the OSLA only when the required distributional assumptions are met, which is often not the case in practice. We carried out extensive simulation studies to evaluate these two dose selection rules and found that the OSLA was often more efficient than the two-step-look-ahead under the proposed Bayesian structure. Moreover, our simulation results show that the proposed Bayesian method outperforms several popular Bayesian methods and that the negative impact of prior misspecification can be managed in the design stage.
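A one-step-look-ahead rule of this flavour can be sketched with a conjugate Beta-binomial working model: update each dose's posterior toxicity from the observed counts and pick the dose with the best current utility. The utility here (distance of posterior mean toxicity from a target rate) and the target value are simplified stand-ins for the paper's working model, not its actual design:

```python
def osla_next_dose(trials, target=0.3, prior=(1.0, 1.0)):
    # One-step-look-ahead under a Beta(a, b) conjugate prior:
    # trials maps dose level -> (n_toxicities, n_treated).
    # Utility = -|posterior mean toxicity - target| (illustrative choice);
    # select the best-so-far dose, i.e. the one maximizing current utility.
    a, b = prior
    best_dose, best_u = None, None
    for dose, (tox, n) in trials.items():
        post_mean = (a + tox) / (a + b + n)
        u = -abs(post_mean - target)
        if best_u is None or u > best_u:
            best_dose, best_u = dose, u
    return best_dose

# Hypothetical counts after three cohorts of six patients
next_dose = osla_next_dose({1: (0, 6), 2: (1, 6), 3: (4, 6)})
```

Because the posterior is analytic, each allocation decision is a constant-time lookup, which is exactly the computational saving the conjugate working model buys.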
Abstract:
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice.
Abstract:
A modeling paradigm is proposed for covariate, variance, and working correlation structure selection in longitudinal data analysis. Appropriate selection of covariates is pertinent to correct variance modeling, and selecting the appropriate covariates and variance function is vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root in approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC contains little information on correlation structures. The proposed model selection strategies are outlined, and a Monte Carlo assessment of their finite sample properties is reported. Two longitudinal studies are used for illustration.
Abstract:
Consider a general regression model with an arbitrary and unknown link function and a stochastic selection variable that determines whether the outcome variable is observable or missing. The paper proposes U-statistics that are based on kernel functions as estimators for the directions of the parameter vectors in the link function and the selection equation, and shows that these estimators are consistent and asymptotically normal.
Abstract:
Efficiency of analysis using generalized estimating equations is enhanced when the intracluster correlation structure is accurately modeled. We compare two existing criteria (a quasi-likelihood information criterion and the Rotnitzky-Jewell criterion) for identifying the true correlation structure via simulations with Gaussian or binomial responses, covariates varying at the cluster or observation level, and exchangeable or AR(1) intracluster correlation structures. Rotnitzky and Jewell's approach performs better when the true intracluster correlation structure is exchangeable, while the quasi-likelihood criterion performs better for an AR(1) structure.
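The two candidate structures compared above differ in how correlation decays with separation: exchangeable assumes a constant correlation within a cluster, while AR(1) lets it decay geometrically with lag. A small sketch constructing both working correlation matrices:

```python
def exchangeable(n, rho):
    # Constant within-cluster correlation: Corr(y_i, y_j) = rho for i != j
    return [[1.0 if i == j else rho for j in range(n)] for i in range(n)]

def ar1(n, rho):
    # Geometric decay with lag: Corr(y_i, y_j) = rho ** |i - j|
    return [[rho ** abs(i - j) for j in range(n)] for i in range(n)]
```

For cluster size 3 and rho = 0.5, the off-diagonal entries are all 0.5 under exchangeable, but 0.5 and 0.25 under AR(1); selection criteria like those in the abstract try to detect which pattern the residual correlations actually follow.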
Abstract:
Fatigue of the steel in rails continues to be of major concern to heavy haul track owners despite careful selection and maintenance of rails. The persistence of fatigue is due in part to the erroneous assumption that the maximum loads on, and stresses in, the rails are predictable. Recent analysis of extensive wheel impact detector data from a number of heavy haul tracks has shown that the most damaging forces are in fact randomly distributed in time and location and can be much greater than generally expected. Large-scale Monte Carlo simulations have been used to identify rail stresses caused by actual, measured distributions of wheel-rail forces on heavy haul tracks. The simulations show that fatigue failure of the rail foot can occur in situations that would be overlooked by traditional analyses. The most serious of these are situations where the track is accessed by multiple operators and where there is a mix of heavy haul, general freight, and/or passenger traffic. The least serious are those where the track carries single-operator-owned heavy haul unit trains. The paper shows that using the nominal maximum axle load of passing traffic, the key input in traditional analyses, is insufficient and must be augmented with consideration of important operational factors. Ignoring such factors can be costly.
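The Monte Carlo argument is that a nominal maximum axle load misses the random tail of impact forces. A toy sketch of that tail-sampling idea, with every distribution and parameter invented for illustration (the paper uses measured force distributions, not these):

```python
import random

def exceedance_probability(n_axles, limit_kN, seed=1):
    # Sample per-axle wheel loads as a nominal Gaussian plus an occasional
    # heavy-tailed impact, and estimate how often the fatigue limit is
    # exceeded. All parameters below are illustrative only.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_axles):
        load = rng.gauss(130.0, 10.0)          # nominal wheel load, kN
        if rng.random() < 0.02:                # rare wheel impact event
            load += rng.expovariate(1 / 80.0)  # exponential impact tail
        if load > limit_kN:
            hits += 1
    return hits / n_axles
```

Even though the nominal load sits far below the limit, the rare impact tail produces a non-negligible exceedance rate over enough axle passes, which is why a deterministic maximum-load analysis can understate fatigue risk.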
Abstract:
Adaptive behaviour is a crucial area of assessment for individuals with Autism Spectrum Disorder (ASD). This study examined the adaptive behaviour profile of 77 young children with ASD using the Vineland-II and analysed factors associated with adaptive functioning. Consistent with previous research using the original Vineland, a distinct autism profile of Vineland-II age-equivalent scores, but not standard scores, was found. The highest scores were in motor skills and the lowest in socialisation. The addition of the Autism Diagnostic Observation Schedule (ADOS) calibrated severity score did not contribute significant variance to Vineland-II scores beyond that accounted for by age and nonverbal ability. Limitations, future directions, and implications are discussed.
Abstract:
In this paper, the trajectory tracking control of an autonomous underwater vehicle (AUV) in six degrees of freedom (6-DOF) is addressed. It is assumed that the system parameters are unknown and the vehicle is underactuated. An adaptive controller is proposed, based on Lyapunov's direct method and the back-stepping technique, which guarantees robustness against parameter uncertainties. The desired trajectory can be any sufficiently smooth bounded curve parameterized by time, even a straight line. In contrast with the majority of research in this field, the likelihood of actuator saturation is considered, and another adaptive controller is designed to overcome this problem, in which the control signals are bounded using saturation functions. The nonlinear adaptive control scheme yields asymptotic convergence of the vehicle to the reference trajectory in the presence of parametric uncertainties. The stability of the presented control laws is proved in the sense of Lyapunov theory and Barbalat's lemma. The efficiency of the controller with saturation functions is verified by comparing numerical simulations of both controllers.
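Bounding control signals with a saturation function means the raw back-stepping command is passed through a smooth limiter before reaching the actuators. A minimal sketch using a tanh-shaped saturation, which is one common choice; the paper's exact saturation function may differ:

```python
import math

def saturated_control(u_raw, u_max):
    # Smoothly bound the commanded control signal to [-u_max, u_max]:
    # near-linear for small commands, asymptotically clipped for large ones,
    # so actuators are never asked to exceed their limits.
    return u_max * math.tanh(u_raw / u_max)
```

For commands well inside the actuator range the limiter is nearly transparent, while extreme commands are clipped near the bound; unlike a hard clip, the output stays differentiable, which simplifies the Lyapunov-based stability analysis.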