274 results for SIFT, Computer Vision, Python, Object Recognition, Feature Detection, Descriptor Computation
Abstract:
The environment moderates behaviour using a subtle language of ‘affordances’ and ‘behaviour-settings’. Affordances are environmental offerings: objects that demand action; a cliff demands a leap and binoculars demand a peek. Behaviour-settings are ‘places’: spaces encoded with expectations and meanings. Behaviour-settings work in the opposite way to affordances: they demand inhibition, such as an introspective demeanour in a church or when under surveillance. Most affordances and behaviour-settings are designed, and as such, designers are effectively predicting brain reactions. • Affordances are nested within, and moderated by, behaviour-settings. Both trigger automatic neural responses (excitation and inhibition). These, for the most part, cancel each other out. This balancing enables object recognition and allows choice about what action should be taken (if any). But when excitation exceeds inhibition, instinctive action will automatically commence. In positive circumstances this may mean laughter or a smile. In negative circumstances, fleeing, screaming or other panic responses are likely. People with poor frontal function, whether due to immaturity (childhood or developmental disorders) or to hypofrontality (schizophrenia, brain damage or dementia), have a reduced capacity to balance excitatory and inhibitory impulses. For these people, environmental behavioural demands increase as frontal brain function declines. • The world around us is not only encoded with symbols and sensory information. Opportunities and restrictions work on a much more primal level. Person/space interactions constantly take place at a molecular scale. Every space we enter has its own special dynamic, where individualism vies for supremacy between the opposing forces of affordance-related excitation and the inhibition intrinsic to behaviour-settings. In this context, even a small change, such as the installation of a CCTV camera, can turn a circus into a prison. • This paper draws on cutting-edge neurological theory to understand the psychological determinants of the everyday experience of the designed environment.
Abstract:
Movement of tephritid flies underpins their survival, reproduction, and ability to establish in new areas and is thus of importance when designing effective management strategies. Much of the knowledge currently available on tephritid movement throughout landscapes comes from the use of direct or indirect methods that rely on the trapping of individuals. Here, we review published experimental designs and methods from mark-release-recapture (MRR) studies, as well as other methods, that have been used to estimate movement of the four major tephritid pest genera (Bactrocera, Ceratitis, Anastrepha, and Rhagoletis). In doing so, we aim to illustrate the theoretical and practical considerations needed to study tephritid movement. MRR studies make use of traps to directly estimate the distance that tephritid species can move within a generation and to evaluate the ecological and physiological factors that influence dispersal patterns. MRR studies, however, require careful planning to ensure that the results obtained are not biased by the methods employed, including marking methods, trap properties, trap spacing, and spatial extent of the trapping array. Despite these obstacles, MRR remains a powerful tool for determining tephritid movement, with data particularly required for understudied species that affect developing countries. To ensure that future MRR studies are successful, we suggest that site selection be carefully considered and sufficient resources be allocated to achieve optimal spacing and placement of traps in line with the stated aims of each study. An alternative to MRR is to make use of indirect methods for determining movement, or more correctly, gene flow, which have become widely available with the development of molecular tools. Key to these methods is the trapping and sequencing of a suitable number of individuals to represent the genetic diversity of the sampled population and investigate population structuring using nuclear genomic markers or non-recombinant mitochondrial DNA markers. Microsatellites are currently the preferred marker for detecting recent population displacement and provide genetic information that may be used in assignment tests for the direct determination of contemporary movement. Neither MRR nor molecular methods, however, are able to monitor fine-scale movements of individual flies. Recent developments in the miniaturization of electronics offer the tantalising possibility to track individual movements of insects using harmonic radar. Computer vision and radio frequency identification tags may also permit the tracking of fine-scale movements by tephritid flies through automated resampling, although these methods come with the same problems as traditional traps used in MRR studies. Although all methods described in this chapter have limitations, the value of a better understanding of tephritid movement far outweighs the drawbacks of the individual methods, given the need for this information to manage tephritid populations.
Abstract:
State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution. Here, we propose to go beyond these representations and model image-sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image-sets, we exploit Csiszár f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifold, which let us make use of more powerful classification schemes to match image-sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, evidence the benefits of our approach over state-of-the-art image-set matching methods.
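A minimal sketch of the general idea described in this abstract, not the authors' exact method: each image set is modelled as a PDF with a Gaussian kernel density estimator, and two sets are compared with a symmetrized KL divergence, one member of the Csiszár f-divergence family. The helper names (`kde_fit`, `symmetric_kl`), the use of SciPy's `gaussian_kde`, and the random stand-in descriptors are illustrative assumptions; the manifold kernels and supervised dimensionality reduction from the paper are omitted.

```python
# Sketch: compare two image sets via KDE-based PDFs and a symmetrized KL
# divergence (an f-divergence). Descriptors here are random stand-ins for
# per-image feature vectors (e.g. one descriptor per video frame).
import numpy as np
from scipy.stats import gaussian_kde


def kde_fit(features):
    """Fit a kernel density estimator to an image set.

    features: array of shape (n_images, dim).
    """
    return gaussian_kde(features.T)  # gaussian_kde expects (dim, n_samples)


def symmetric_kl(p_kde, q_kde, samples_p, samples_q, eps=1e-12):
    """Monte-Carlo estimate of KL(p||q) + KL(q||p) using each set's own samples."""
    p_on_p = p_kde(samples_p.T) + eps
    q_on_p = q_kde(samples_p.T) + eps
    q_on_q = q_kde(samples_q.T) + eps
    p_on_q = p_kde(samples_q.T) + eps
    kl_pq = np.mean(np.log(p_on_p / q_on_p))
    kl_qp = np.mean(np.log(q_on_q / p_on_q))
    return kl_pq + kl_qp


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    set_a = rng.normal(0.0, 1.0, size=(200, 5))  # stand-in descriptors, set A
    set_b = rng.normal(0.5, 1.2, size=(200, 5))  # stand-in descriptors, set B
    kde_a, kde_b = kde_fit(set_a), kde_fit(set_b)
    print("symmetric KL(A, B):", symmetric_kl(kde_a, kde_b, set_a, set_b))
```

In a realistic pipeline the descriptors would come from a feature extractor (e.g. SIFT or CNN features per image), and the resulting divergence would feed a nearest-neighbour or kernel-based classifier for tasks such as video-based face recognition.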
Abstract:
Scene understanding has been investigated mainly from a visual-information point of view. Recently, depth has provided an extra wealth of information, allowing more geometric knowledge to be fused into scene understanding. Yet to form a holistic view, especially in robotic applications, one can create even more data by interacting with the world. In fact, humans, as they grow up, seem to investigate the world around them heavily through haptic exploration. We show an application of haptic exploration on a humanoid robot in combination with a learning method for object segmentation. The actions, performed consecutively, improve the segmentation of objects in the scene.