310 results for Object selection
Abstract:
In this report, an artificial neural network (ANN) based automated emergency landing site selection system for unmanned aerial vehicles (UAVs) and general aviation (GA) is described. The system aims to increase the safety of UAV operation by emulating pilot decision making in emergency landing scenarios, using an ANN to select a safe landing site from the available candidates. The ability of an ANN to model complex input relationships makes it well suited to the multicriteria decision making (MCDM) process of emergency landing site selection. The ANN operates by identifying the more favorable of two landing sites when provided with an input vector derived from both landing sites' parameters, the aircraft's current state and wind measurements. The system consists of a feed-forward ANN, a pre-processor class that produces the ANN input vectors, and a class responsible for ranking the landing site candidates using the ANN. The system was successfully implemented in C++ using the FANN C++ library and ROS. Results obtained from ANN training and from simulations with landing sites randomly generated by a site detection simulator verify the feasibility of an ANN based automated emergency landing site selection system.
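To make the pairwise-comparison idea above concrete, the following is a minimal sketch in Python rather than the paper's C++/FANN/ROS implementation; the feature vectors, the `predict` scoring interface and the toy network stand-in are illustrative assumptions, not the authors' design.

```python
# Minimal sketch (not the paper's FANN/ROS implementation): rank landing-site
# candidates by pairwise comparison with a trained feed-forward network.
# Feature layout and the `predict` interface are illustrative assumptions.
import numpy as np

def pairwise_rank(candidates, predict):
    """candidates: list of 1-D feature vectors (site parameters + aircraft
    state + wind); predict(x) returns a score in [0, 1], > 0.5 meaning the
    first site in the concatenated input vector is preferred."""
    wins = np.zeros(len(candidates))
    for i, a in enumerate(candidates):
        for j, b in enumerate(candidates):
            if i == j:
                continue
            x = np.concatenate([a, b])          # input vector built from both sites
            if predict(x) > 0.5:
                wins[i] += 1                    # site i judged safer than site j
    return np.argsort(-wins)                    # candidate indices, best first

# Stand-in "network" for illustration: prefer the site with the larger first feature.
toy_predict = lambda x: float(x[0] > x[len(x) // 2])
sites = [np.array([0.3, 1.0]), np.array([0.9, 0.2]), np.array([0.6, 0.5])]
print(pairwise_rank(sites, toy_predict))        # -> [1 2 0]
```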
Abstract:
Travel speed is one of the most critical parameters for road safety; the evidence suggests that increased vehicle speed is associated with higher crash risk and injury severity. Both naturalistic and simulator studies have reported that drivers distracted by a mobile phone select a lower driving speed. Speed decrements have been argued to be a risk-compensatory behaviour of distracted drivers. Nonetheless, the extent and circumstances of the speed change among distracted drivers remain poorly understood. As such, the primary objective of this study was to investigate patterns of speed variation in relation to contextual factors and distraction. Using the CARRS-Q high-fidelity Advanced Driving Simulator, the speed selection behaviour of 32 drivers aged 18-26 years was examined in two phone conditions: baseline (no phone conversation) and handheld phone operation. The simulated driving route contained five types of road traffic complexity: a road section with a horizontal S curve, a horizontal S curve with adjacent traffic, a straight segment of suburban road without traffic, a straight segment of suburban road with traffic interactions, and a road segment in a city environment. Speed deviations from the posted speed limit were analysed using Ward's hierarchical clustering method to identify the effects of the road traffic environment and cognitive distraction. The speed deviations along curved road sections formed two different clusters for the two phone conditions, implying that distracted drivers adopt a different strategy for selecting driving speed in a complex driving situation. In particular, distracted drivers selected a lower speed while driving along a horizontal curve. The speed deviations along the city road segment and the other straight road segments grouped into a different cluster, and the deviations were not significantly different across phone conditions, suggesting a negligible effect of distraction on speed selection along these road sections. Future research should focus on developing a risk compensation model to explain the relationship between road traffic complexity and distraction.
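As a rough illustration of the clustering step named in the abstract (Ward's hierarchical clustering of speed deviations), here is a minimal sketch with invented numbers; the values and segment groupings are placeholders, not the study's data.

```python
# Illustrative sketch only (not the study's data or code): cluster speed
# deviations from the posted limit with Ward's method.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical mean speed deviations (km/h) per road segment,
# columns = [baseline, handheld phone] conditions.
speed_dev = np.array([
    [-6.0, -9.5],   # horizontal S curve
    [-5.5, -9.0],   # S curve with adjacent traffic
    [-1.0, -1.5],   # straight suburban, no traffic
    [-0.5, -1.0],   # straight suburban, with traffic
    [-2.0, -2.5],   # city segment
])

Z = linkage(speed_dev, method="ward")          # Ward's hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # curved segments group apart from the straight/city segments
```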
Abstract:
Spatial data analysis has become increasingly important in ecology and economics over the last decade. One focus of spatial data analysis is how to select predictors, variance functions and correlation functions. In general, however, the true covariance function is unknown and the working covariance structure is often misspecified. In this paper, our target is to find a good strategy for identifying the best model from a candidate set using model selection criteria. This paper evaluates the ability of several information criteria (the corrected Akaike information criterion, the Bayesian information criterion (BIC) and the residual information criterion (RIC)) to choose the optimal model when the working correlation function, the working variance function and the working mean function are correct or misspecified. Simulations are carried out for small to moderate sample sizes. Four candidate covariance functions (exponential, Gaussian, Matern and rational quadratic) are used in the simulation studies. Summarizing the simulation results, we find that a misspecified working correlation structure can still capture some of the spatial correlation information in model fitting. When the sample size is large enough, BIC and RIC perform well even if the working covariance is misspecified. Moreover, the performance of these information criteria is related to the average level of model fit, as indicated by the average adjusted R-square, and overall RIC performs well.
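A minimal sketch of how such criteria are computed and compared: the formulas for the corrected AIC and BIC are standard, while the log-likelihoods, parameter counts and sample size below are invented placeholders (the RIC is not reproduced here).

```python
# Generic sketch of model comparison by information criteria; the fitted
# spatial models and their log-likelihoods are illustrative placeholders.
import numpy as np

def aicc(loglik, k, n):
    """Corrected AIC: AIC plus a small-sample penalty."""
    return 2 * k - 2 * loglik + (2 * k * (k + 1)) / (n - k - 1)

def bic(loglik, k, n):
    """Bayesian information criterion."""
    return k * np.log(n) - 2 * loglik

# Choose the candidate covariance model with the smallest criterion value,
# e.g. over exponential / Gaussian / Matern / rational-quadratic fits.
fits = {"exponential": (-120.4, 3), "gaussian": (-123.1, 3),
        "matern": (-119.8, 4), "rational_quadratic": (-121.0, 4)}  # (loglik, k), invented
n = 100
best = min(fits, key=lambda m: bic(fits[m][0], fits[m][1], n))
print(best)   # the same pattern applies with aicc in place of bic
```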
Abstract:
Selection criteria and misspecification tests for the intra-cluster correlation structure (ICS) in longitudinal data analysis are considered. In particular, the asymptotic distribution of the correlation information criterion (CIC) is derived and a new method for selecting a working ICS is proposed by standardizing the selection criterion as a p-value. The CIC test is found to be powerful in detecting misspecification of the working ICS, while for working ICS selection the standardized CIC test is also shown to have satisfactory performance. Simulation studies and applications to two real longitudinal datasets illustrate how these criteria and tests can be used.
Abstract:
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice.
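One of the two criteria named above compares a model-sensitive and a model-robust covariance estimator through a geodesic distance. A common choice of geodesic on positive-definite matrices is the affine-invariant Riemannian metric sketched below; whether this is exactly the metric used by the authors is an assumption here, and the matrices are illustrative.

```python
# Sketch of a geodesic (affine-invariant Riemannian) distance between two
# positive-definite covariance estimates; inputs below are invented examples.
import numpy as np
from scipy.linalg import eigvalsh

def spd_geodesic_distance(A, B):
    """d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F, computed from the
    generalized eigenvalues of (B, A)."""
    lam = eigvalsh(B, A)               # eigenvalues of A^{-1} B (real, > 0 for SPD inputs)
    return np.sqrt(np.sum(np.log(lam) ** 2))

# A small discrepancy between the model-based and the robust (sandwich)
# covariance estimate suggests the working covariance model is adequate.
V_model = np.array([[1.0, 0.3], [0.3, 1.0]])
V_robust = np.array([[1.2, 0.25], [0.25, 0.9]])
print(spd_geodesic_distance(V_model, V_robust))
```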
Abstract:
A modeling paradigm is proposed for covariate, variance and working correlation structure selection in longitudinal data analysis. Appropriate selection of covariates is pertinent to correct variance modeling, and selecting the appropriate covariates and variance function is vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root based on approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC contains little information on correlation structures. The proposed model selection strategies are outlined and a Monte Carlo assessment of their finite sample properties is reported. Two longitudinal studies are used for illustration.
Abstract:
Critical, loud, highly discursive and polarised, the #auspol hashtag represents a space, an event and a network for politically involved individuals to engage in and with Australian politics and to speak to, at and about a variety of involved stakeholders. Contributors declare, debate and often berate each other's opinions about current Australian politics. The hashtag itself is an important material object and engagement event within this performance of political participation. As a long-standing institution in the Twittersphere, and one studied by the authors and their colleagues since its early beginnings (Bruns and Burgess, 2011; Bruns and Stieglitz, 2012; 2013), the #auspol hashtag provides a potent case study through which to explore the discursive and affective dimensions of a hashtag public. This chapter engages both empirically and theoretically with the use of this particular hashtag on Twitter to provide a qualitatively illustrated case in point for thinking about the long-term use of political hashtags as engagement events.
Abstract:
Consider a general regression model with an arbitrary and unknown link function and a stochastic selection variable that determines whether the outcome variable is observable or missing. The paper proposes U-statistics that are based on kernel functions as estimators for the directions of the parameter vectors in the link function and the selection equation, and shows that these estimators are consistent and asymptotically normal.
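For orientation only, the display below gives the generic form of a second-order U-statistic with kernel h and a hedged estimating form; the paper's specific kernel, which encodes the link function and the selection equation, is not reproduced here.

```latex
% Generic second-order U-statistic with kernel h; theta collects the direction
% parameters of the link and selection equations (specific kernel not reproduced).
\[
  U_n(\theta) \;=\; \binom{n}{2}^{-1} \sum_{1 \le i < j \le n} h\!\left(Z_i, Z_j;\, \theta\right),
  \qquad
  \hat{\theta} \;=\; \arg\max_{\theta}\, U_n(\theta).
\]
```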
Abstract:
Efficiency of analysis using generalized estimating equations is enhanced when the intracluster correlation structure is accurately modeled. We compare two existing criteria (a quasi-likelihood information criterion and the Rotnitzky-Jewell criterion) for identifying the true correlation structure via simulations with Gaussian or binomial responses, covariates varying at the cluster or observation level, and exchangeable or AR(1) intracluster correlation structures. Rotnitzky and Jewell's approach performs better when the true intracluster correlation structure is exchangeable, while the quasi-likelihood criterion performs better for an AR(1) structure.
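The two working intracluster correlation structures compared in these simulations are standard; the sketch below constructs them for a cluster of size 4 with an illustrative correlation parameter.

```python
# Sketch of the exchangeable and AR(1) working correlation structures;
# cluster size and alpha are illustrative values.
import numpy as np

def exchangeable(alpha, m):
    """All pairs within a cluster share the same correlation alpha."""
    return (1 - alpha) * np.eye(m) + alpha * np.ones((m, m))

def ar1(alpha, m):
    """Correlation decays geometrically with the lag between observations."""
    idx = np.arange(m)
    return alpha ** np.abs(idx[:, None] - idx[None, :])

print(exchangeable(0.5, 4))
print(ar1(0.5, 4))
```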
Abstract:
In this paper, we tackle the problem of unsupervised domain adaptation for classification. In the unsupervised scenario where no labeled samples from the target domain are provided, a popular approach consists in transforming the data such that the source and target distributions become similar. To compare the two distributions, existing approaches make use of the Maximum Mean Discrepancy (MMD). However, this does not exploit the fact that probability distributions lie on a Riemannian manifold. Here, we propose to make better use of the structure of this manifold and rely on the distance on the manifold to compare the source and target distributions. In this framework, we introduce a sample selection method and a subspace-based method for unsupervised domain adaptation, and show that both these manifold-based techniques outperform the corresponding approaches based on the MMD. Furthermore, we show that our subspace-based approach yields state-of-the-art results on a standard object recognition benchmark.
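For reference, the baseline the authors compare against is the Maximum Mean Discrepancy; a minimal sketch of the (biased) empirical MMD with an RBF kernel follows, with invented source and target samples rather than the paper's benchmark data.

```python
# Minimal sketch of the biased empirical MMD with an RBF kernel (baseline
# measure named in the abstract); samples and gamma are illustrative.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel values between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """MMD^2 = mean k(x,x') + mean k(y,y') - 2 mean k(x,y)."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(100, 5))
target = rng.normal(0.5, 1.0, size=(100, 5))   # shifted target domain
print(mmd2(source, target))
```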
Abstract:
The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g. Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), or classification trees with complexity- or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approaches. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see the review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O'Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise. We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine learning SDMs, or to define priors within Bayesian SDMs.
Abstract:
We carried out a discriminant analysis with identity by descent (IBD) at each marker as the inputs and the sib pair type (affected-affected versus affected-unaffected) as the output. Using simple logistic regression for this discriminant analysis, we illustrate the importance of comparing models with different numbers of parameters. Such model comparisons are best carried out using either the Akaike information criterion (AIC) or the Bayesian information criterion (BIC). When AIC (or BIC) stepwise variable selection was applied to the German Asthma data set, a group of markers was selected that provides the best fit to the data (assuming an additive effect). Interestingly, these 25-26 markers were not identical to those with the highest (in magnitude) single-locus LOD scores.
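A minimal sketch of AIC-guided stepwise (here, forward) selection with a logistic model for sib-pair type; the simulated IBD matrix, effect sizes and marker indices are invented placeholders, not the German Asthma data.

```python
# Sketch of forward stepwise selection of markers by AIC with logistic
# regression; the data below are simulated placeholders.
import numpy as np
import statsmodels.api as sm

def forward_aic(y, X):
    """Greedy forward selection: add the column that lowers AIC most,
    stop when no addition improves it."""
    selected, remaining = [], list(range(X.shape[1]))
    best_aic = np.inf
    while remaining:
        trials = []
        for j in remaining:
            design = sm.add_constant(X[:, selected + [j]])
            trials.append((sm.Logit(y, design).fit(disp=0).aic, j))
        aic, j = min(trials)
        if aic >= best_aic:
            break
        best_aic, selected = aic, selected + [j]
        remaining.remove(j)
    return selected

rng = np.random.default_rng(1)
ibd = rng.integers(0, 3, size=(200, 10)).astype(float)    # IBD sharing (0/1/2) at 10 markers
logit_p = -1.0 + 0.8 * ibd[:, 2] + 0.6 * ibd[:, 7]        # additive effects at two markers
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))           # 1 = affected-affected pair
print(forward_aic(y, ibd))
```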
Abstract:
Work ability describes employees' capability to carry out their work with respect to physical and psychological job demands. This study investigated direct and interactive effects of age, job control, and the use of successful aging strategies called selection, optimization, and compensation (SOC) in predicting work ability. We assessed SOC strategies and job control by using employee self-reports, and we measured employees' work ability using supervisor ratings. Data collected from 173 health-care employees showed that job control was positively associated with work ability. Additionally, we found a three-way interaction effect of age, job control, and use of SOC strategies on work ability. Specifically, the negative relationship between age and work ability was weakest for employees with high job control and high use of SOC strategies. These results suggest that the use of successful aging strategies and enhanced control at work are conducive to maintaining the work ability of aging employees. We discuss theoretical and practical implications regarding the beneficial role of the use of SOC strategies utilized by older employees and enhanced contextual resources at work for aging employees.
Abstract:
This study investigated within-person relationships between daily problem solving demands, selection, optimization, and compensation (SOC) strategy use, job satisfaction, and fatigue at work. Based on conservation of resources theory, it was hypothesized that high SOC strategy use boosts the positive relationship between problem solving demands and job satisfaction, and buffers the positive relationship between problem solving demands and fatigue. Using a daily diary study design, data were collected from 64 administrative employees who completed a general questionnaire and two daily online questionnaires over four work days. Multilevel analyses showed that problem solving demands were positively related to fatigue, but unrelated to job satisfaction. SOC strategy use was positively related to job satisfaction, but unrelated to fatigue. A buffering effect of high SOC strategy use on the demands-fatigue relationship was found, but no booster effect on the demands-satisfaction relationship. The results suggest that high SOC strategy use is a resource that protects employees from the negative effects of high problem solving demands.
Abstract:
The concept of focus on opportunities describes how many new goals, options, and possibilities employees believe they have in their personal future at work. This study investigated the specific and shared effects of age, job complexity, and the use of successful aging strategies called selection, optimization, and compensation (SOC) in predicting focus on opportunities. Results of data collected from 133 employees of one company (mean age = 38 years, SD = 13, range 16–65 years) showed that age was negatively related, and job complexity and use of SOC strategies were positively related, to focus on opportunities. In addition, older employees in high-complexity jobs, and older employees in low-complexity jobs with high use of SOC strategies, were better able to maintain a focus on opportunities than older employees in low-complexity jobs with low use of SOC strategies.