786 results for Modeling Non-Verbal Behaviors Using Machine Learning
Abstract:
Agricultural pests are responsible for millions of dollars in crop losses and management costs every year. In order to implement optimal site-specific treatments and reduce control costs, new methods to accurately monitor and assess pest damage need to be investigated. In this paper we explore the combination of unmanned aerial vehicles (UAV), remote sensing and machine learning techniques as a promising methodology to address this challenge. The deployment of UAVs as a sensor platform is a rapidly growing field of study for biosecurity and precision agriculture applications. In this experiment, a data collection campaign is performed over a sorghum crop severely damaged by white grubs (Coleoptera: Scarabaeidae). The larvae of these scarab beetles feed on the roots of plants, which in turn impairs root exploration of the soil profile. In the field, crop health status could be classified according to three levels: bare soil where plants were decimated, transition zones of reduced plant density and healthy canopy areas. In this study, we describe the UAV platform deployed to collect high-resolution RGB imagery as well as the image processing pipeline implemented to create an orthoimage. An unsupervised machine learning approach is formulated in order to create a meaningful partition of the image into each of the crop levels. The aim of this approach is to simplify the image analysis step by minimizing user input requirements and avoiding the manual data labelling necessary in supervised learning approaches. The implemented algorithm is based on the K-means clustering algorithm. In order to control high-frequency components present in the feature space, a neighbourhood-oriented parameter is introduced by applying Gaussian convolution kernels prior to K-means clustering. The results show the algorithm delivers consistent decision boundaries that classify the field into three clusters, one for each crop health level as shown in Figure 1. 
The methodology presented in this paper represents an avenue for further research towards automated crop damage assessments and biosecurity surveillance.
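As a rough illustration of the clustering step described above, the sketch below smooths a single-channel feature image with a Gaussian convolution kernel before running plain K-means with k = 3. This is a minimal NumPy-only sketch, not the authors' pipeline: the real feature space is derived from high-resolution RGB orthoimagery, and the function names (`gaussian_kernel`, `smooth`, `kmeans_1d`) are illustrative.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian convolution kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def smooth(image, kernel):
    """Naive 2-D convolution with zero padding at the borders."""
    pad = kernel.shape[0] // 2
    padded = np.pad(image.astype(float), pad)
    out = np.empty(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kernel.shape[0],
                                      j:j + kernel.shape[1]] * kernel)
    return out

def kmeans_1d(values, k=3, iters=20, seed=0):
    """Plain K-means on a flat feature vector; returns labels and sorted centres."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(np.unique(values), size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, np.sort(centers)

# Cluster a smoothed synthetic image into three "crop health" levels.
img = np.r_[np.zeros(200), np.full(200, 5.0), np.full(200, 10.0)].reshape(20, 30)
feat = smooth(img, gaussian_kernel()).ravel()
labels, centers = kmeans_1d(feat, k=3)
```

The Gaussian pre-smoothing plays the role of the paper's neighbourhood-oriented parameter: it suppresses high-frequency variation in the feature space so that the three clusters correspond to spatially coherent regions rather than isolated pixels.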
Abstract:
Colour is an essential aspect of our daily life, and yet it remains a neglected issue within marketing research. The main reason for studying colours is to understand their impact on consumer behaviour; colours should therefore be studied in relation to branding, advertising, packaging, interiors, and employees' clothing, for example. This was an exploratory study of the impact of colours on packages. The focus was on low-involvement purchasing, where the consumer puts limited effort into decision-making. The basis was a scenario in which the consumer faces an unpredictable problem requiring immediate action. The consumer may be in a hurry, which indicates time pressure. The consumer may lack brand preferences, or the preferred brand may be out of stock. The point is that the choice is made at the point of purchase. Further, the purchasing involves product classes in which the core products behind the brands are indistinguishable from each other. Three research questions were posed. Two were answered by conjoint analysis: whether colours have an impact on decision-making, and whether any such impact is related to the product class. Sixteen hypothetical packages were designed in two product classes within healthcare: painkillers and medicines for sore throats. The last research question aimed at detecting how an analysis could be carried out in order to understand the impact of colours. This question was answered by conducting interviews that were analysed using the laddering method and a semiotic approach. The study found that colours do indeed have an impact on consumer behaviour, this impact being related to the context, such as the product class. The role of colours on packages was found to be threefold: attention, aesthetics, and communication.
The study focused on colours as a means of communication, and it proposes that colours convey product, brand, and product class meanings, these meanings having an impact on consumers' decision-making at the point of purchase. In addition, the study demonstrates how design elements such as colours can be understood by regarding them as non-verbal signs. The study also presents an empirical design, involving quantitative and qualitative techniques, that can be used to gain an in-depth understanding of the impact of design elements on consumer behaviour. Hannele Kauppinen is associated with CERS, the Centre for Relationship Marketing and Service Management of the Swedish School of Economics and Business Administration.
Abstract:
Non-orthogonal space-time block codes (STBC) with large dimensions are attractive because they can simultaneously achieve both high spectral efficiencies (the same spectral efficiency as in V-BLAST for a given number of transmit antennas) as well as full transmit diversity. Decoding of non-orthogonal STBCs with large dimensions has been a challenge. In this paper, we present a reactive tabu search (RTS) based algorithm for decoding non-orthogonal STBCs from cyclic division algebras (CDA) having large dimensions. Under i.i.d. fading and perfect channel state information at the receiver (CSIR), our simulation results show that RTS-based decoding of a 12 × 12 STBC from CDA and 4-QAM with 288 real dimensions achieves i) an uncoded BER of 10^(-3) at an SNR just 0.5 dB away from SISO AWGN performance, and ii) a coded BER performance within about 5 dB of the theoretical MIMO capacity, using a rate-3/4 turbo code at a spectral efficiency of 18 bps/Hz. RTS is shown to achieve near-SISO AWGN performance with fewer dimensions than the LAS algorithm (which we reported recently), at some extra complexity compared to LAS. We also report good BER performance of RTS when the i.i.d. fading and perfect CSIR assumptions are relaxed, by considering a spatially correlated MIMO channel model and by using a training-based iterative RTS decoding/channel estimation scheme.
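To make the search idea concrete, here is a toy tabu-search detector for the real-valued system y = Hx + n with x ∈ {−1, +1}^n (the per-dimension alphabet of 4-QAM after real decomposition). It is a hedged sketch of plain tabu search, not the paper's reactive variant (which adapts the tabu-list length); the function name and parameters are illustrative assumptions.

```python
import numpy as np

def tabu_search_detect(H, y, n_iters=200, tabu_len=5, seed=0):
    """Tabu-search detection of a +/-1 symbol vector x minimizing ||y - Hx||^2.

    Neighbourhood = single-coordinate sign flips. Recently flipped
    coordinates are tabu unless the move beats the best cost seen so
    far (aspiration criterion)."""
    rng = np.random.default_rng(seed)
    n = H.shape[1]
    x = rng.choice([-1.0, 1.0], size=n)
    cost = lambda v: float(np.sum((y - H @ v) ** 2))
    best_x, best_cost = x.copy(), cost(x)
    tabu = []
    for _ in range(n_iters):
        move, move_cost = None, np.inf
        for i in range(n):
            cand = x.copy()
            cand[i] = -cand[i]
            c = cost(cand)
            if (i not in tabu or c < best_cost) and c < move_cost:
                move, move_cost = i, c
        if move is None:          # every move tabu and none improves: stop
            break
        x[move] = -x[move]        # accept the best allowed move, even if worse
        tabu.append(move)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if move_cost < best_cost:
            best_x, best_cost = x.copy(), move_cost
    return best_x, best_cost
```

Accepting the best allowed move even when it worsens the cost is what lets tabu search escape the local minima that defeat purely greedy detectors at these dimensions.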
Abstract:
Non-orthogonal space-time block codes (STBC) from cyclic division algebras (CDA) are attractive because they can simultaneously achieve both high spectral efficiencies (the same spectral efficiency as in V-BLAST for a given number of transmit antennas) as well as full transmit diversity. Decoding of non-orthogonal STBCs with hundreds of dimensions has been a challenge. In this paper, we present a probabilistic data association (PDA) based algorithm for decoding non-orthogonal STBCs with large dimensions. Our simulation results show that the proposed PDA-based algorithm achieves near-SISO-AWGN uncoded BER as well as near-capacity coded BER (within 5 dB of the theoretical capacity) for large non-orthogonal STBCs from CDA. We study the effect of spatial correlation on the BER, and show that the performance loss due to spatial correlation can be alleviated by providing more receive spatial dimensions. We report good BER performance when a training-based iterative decoding/channel estimation scheme is used (instead of assuming perfect channel knowledge) in channels with large coherence times. A comparison of the performance of the PDA algorithm with that of the likelihood ascent search (LAS) algorithm (reported in our recent work) is also presented.
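A stripped-down view of the PDA idea for y = Hx + n with x ∈ {−1, +1}^n: each symbol's posterior is refined in turn while the other symbols' soft estimates are cancelled and the residual is treated as Gaussian. This is an illustrative simplification (the full algorithm also tracks the residual-interference variance); the names and parameters below are ours, not the paper's.

```python
import numpy as np

def pda_detect(H, y, sigma2=1.0, n_iters=10):
    """Simplified PDA detection for x in {-1,+1}^n with y = Hx + noise.

    p[i] approximates P(x_i = +1). Each update cancels the soft
    estimates of all other symbols, then treats what remains as
    Gaussian noise of variance sigma2 when forming the LLR."""
    n = H.shape[1]
    p = np.full(n, 0.5)                 # uninformative start
    for _ in range(n_iters):
        for i in range(n):
            soft = 2.0 * p - 1.0        # soft symbol estimates E[x_j]
            soft[i] = 0.0               # do not cancel symbol i itself
            r = y - H @ soft            # soft interference cancellation
            llr = 2.0 * (H[:, i] @ r) / sigma2
            p[i] = 1.0 / (1.0 + np.exp(-llr))
    return np.where(p >= 0.5, 1.0, -1.0), p
```

The per-symbol updates cost only matrix-vector products, which is what makes PDA attractive at the hundreds of real dimensions discussed above, where sphere-decoding-style searches are infeasible.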
Abstract:
Digital human modeling (DHM) involves modeling the structure, form, and functional capabilities of human users for ergonomics simulation. This paper presents the application of geometric procedures for investigating the characteristics of human visual capabilities, which are particularly important in this context. The cone of unrestricted directions through the pupil on a tessellated head model is used as the geometric interpretation of the clinical field of view (FoV), and the results obtained are experimentally validated. FoVs are then re-computed by estimating the pupil movement for a given gaze direction using Listing's law. Significant variation of the FoV is observed with variation in gaze direction. A novel cube-grid representation, which integrates the unit-cube representation of directions and the enhanced slice representation, is introduced for fast and exact point classification in point-visibility analysis for a given FoV. Computing the containment frequency of every grid cell for a given set of FoVs enables the determination of percentile-based FoV contours for estimating the visual performance of a given population. This new concept makes visibility analysis more meaningful from an ergonomics point of view. The algorithms are fast enough to support interactive analysis of reasonably complex scenes on a typical desktop computer. (C) 2011 Elsevier Ltd. All rights reserved.
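For intuition on the point-classification step, the sketch below tests whether a scene point lies inside a field of view modelled as a simple circular cone around the gaze direction. The paper's FoV (a cone of unrestricted directions over a tessellated head model) is not circular, and the cube-grid acceleration is omitted; this is a hedged minimal version with illustrative names.

```python
import numpy as np

def in_fov(point, pupil, gaze, half_angle_deg):
    """True if `point` lies inside a circular-cone FoV with apex at the
    pupil, axis along the gaze direction, and the given half-angle."""
    d = np.asarray(point, float) - np.asarray(pupil, float)
    d = d / np.linalg.norm(d)
    g = np.asarray(gaze, float)
    g = g / np.linalg.norm(g)
    # Inside the cone iff the angle between d and the axis is small enough.
    return bool(d @ g >= np.cos(np.radians(half_angle_deg)))

# Classify two points for a gaze along +z with a 60-degree half-angle.
pupil, gaze = np.zeros(3), np.array([0.0, 0.0, 1.0])
visible = in_fov([0.5, 0.0, 1.0], pupil, gaze, 60.0)   # ~27 deg off-axis
hidden  = in_fov([1.0, 0.0, 0.0], pupil, gaze, 60.0)   # 90 deg off-axis
```

Repeating such a containment test per grid cell over a set of FoVs (one per gaze direction or per subject) yields the containment frequencies from which the percentile-based contours described above are derived.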
Abstract:
We describe a method to fabricate high-density biological microarrays using lithographic patterning of polyelectrolyte multilayers formed by spin-assisted electrostatic layer-by-layer assembly. Proteins or DNA can be immobilized on the polyelectrolyte patterns via electrostatic attachment, leading to functional microarrays. As the immobilization is done using an electrostatically assembled polyelectrolyte anchor, this process is substrate independent and is fully compatible with a standard semiconductor fabrication process flow. Moreover, the electrostatic assembly of the anchor layer is a fast process, with reaction saturation times of the order of a few minutes, unlike covalent schemes that typically require hours to reach saturation. The substrate-independent nature of this technique is demonstrated by functionalizing glass slides as well as regular transparency sheets using the same procedure. Using a model protein assay, we demonstrate that the non-covalent immobilization scheme described here has competitive performance compared to conventional covalent immobilization schemes described in the literature. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
A new method of modeling partial delamination in composite beams is proposed and implemented using the finite element method. The homogenized cross-sectional stiffness of the delaminated beam is obtained by the proposed analytical technique, including extension-bending, extension-twist, and torsion-bending coupling terms, and hence can be used with an existing finite element method. A two-noded C1-type Timoshenko beam element with 4 degrees of freedom per node is implemented for dynamic analysis of beams. The results for different delamination scenarios and beams subjected to different boundary conditions are validated against available experimental results in the literature and/or against 3D finite element simulations using COMSOL. Results for the first torsional-mode frequency of the partially delaminated beam are validated against the COMSOL results. The key point of the proposed model is that partial delamination in beams can be analyzed using a beam model, rather than 3D or plate models. (c) 2013 Elsevier B.V. All rights reserved.
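As a generic illustration of the finite-element workflow behind such validations (not the authors' C1 Timoshenko element, and with no delamination or coupling terms), the sketch below assembles standard 2-node Euler-Bernoulli beam elements and recovers the first natural frequency of a cantilever, which can be checked against the classical value (β1·L)²·√(EI/ρAL⁴) ≈ 3.516 for unit properties. All names and parameters are illustrative.

```python
import numpy as np

def beam_element(EI, rhoA, le):
    """Stiffness and consistent mass matrices of a 2-node Euler-Bernoulli
    beam element (dofs per node: transverse deflection w, rotation theta)."""
    K = EI / le**3 * np.array([
        [ 12,     6*le,    -12,     6*le],
        [ 6*le,   4*le**2, -6*le,   2*le**2],
        [-12,    -6*le,     12,    -6*le],
        [ 6*le,   2*le**2, -6*le,   4*le**2]])
    M = rhoA * le / 420 * np.array([
        [ 156,    22*le,    54,    -13*le],
        [ 22*le,  4*le**2,  13*le, -3*le**2],
        [ 54,     13*le,    156,   -22*le],
        [-13*le, -3*le**2, -22*le,  4*le**2]])
    return K, M

def cantilever_frequencies(EI=1.0, rhoA=1.0, L=1.0, n_el=8):
    """Assemble n_el elements, clamp the left end, and return the
    natural frequencies (rad/s) in ascending order."""
    ndof = 2 * (n_el + 1)
    K = np.zeros((ndof, ndof))
    M = np.zeros((ndof, ndof))
    le = L / n_el
    Ke, Me = beam_element(EI, rhoA, le)
    for e in range(n_el):
        i = 2 * e                       # shared node between elements e-1 and e
        K[i:i+4, i:i+4] += Ke
        M[i:i+4, i:i+4] += Me
    K, M = K[2:, 2:], M[2:, 2:]         # clamp w and theta at node 0
    lam = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.abs(np.real(lam))))
```

The paper's approach follows the same assemble-and-solve pattern; its contribution is the homogenized cross-sectional stiffness (with the coupling terms listed above) that lets a 1D element stand in for a 3D delamination model.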
Abstract:
The aim in this paper is to allocate the 'sleep time' of the individual sensors in an intrusion detection application so that the energy consumption from the sensors is reduced, while keeping the tracking error to a minimum. We propose two novel reinforcement learning (RL) based algorithms that attempt to minimize a certain long-run average cost objective. Both our algorithms incorporate feature-based representations to handle the curse of dimensionality associated with the underlying partially-observable Markov decision process (POMDP). Further, the feature selection scheme used in our algorithms intelligently manages the energy cost and tracking cost factors, which in turn assists the search for the optimal sleeping policy. We also extend these algorithms to a setting where the intruder's mobility model is not known, by incorporating a stochastic iterative scheme for estimating the mobility model. The simulation results on a synthetic 2-d network setting are encouraging.
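The flavour of the energy-versus-tracking trade-off can be shown with a deliberately simplified, fully observed toy MDP solved by tabular Q-learning. The paper's algorithms are feature-based average-cost methods for a POMDP, so everything below (the state space, cost weights, and parameters) is an illustrative assumption, not the authors' model.

```python
import numpy as np

def q_learning_sleep(n_states=5, n_actions=3, episodes=2000, alpha=0.1,
                     gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning on a toy sensor sleep-scheduling MDP.

    State  = quantized tracking uncertainty (0 = low .. n_states-1 = high).
    Action = 0: stay awake and sense (costs energy, reduces uncertainty);
             a > 0: sleep for a slots (free, but uncertainty grows).
    Cost   = sensing energy + penalty proportional to uncertainty.
    Q-learning here MINIMIZES cost, hence argmin/min below."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = int(rng.integers(n_states))
        for _ in range(50):
            if rng.random() < eps:                  # epsilon-greedy exploration
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmin(Q[s]))
            energy = 1.0 if a == 0 else 0.0
            s2 = max(s - 2, 0) if a == 0 else min(s + a, n_states - 1)
            cost = energy + 0.8 * s2                # energy + tracking penalty
            Q[s, a] += alpha * (cost + gamma * Q[s2].min() - Q[s, a])
            s = s2
    return Q

Q = q_learning_sleep()
policy = np.argmin(Q, axis=1)   # learned sleep duration per uncertainty level
```

Even this toy version reproduces the qualitative behaviour the paper targets: the learned policy senses when uncertainty is high and sleeps when it is low, trading energy against tracking error.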
Abstract:
Fingerprints are used for identification in forensics, and identification is classified as manual or automatic. Automatic fingerprint identification systems are classified as latent or exemplar. A novel exemplar technique, Fingerprint Image Verification using Dictionary Learning (FIVDL), is proposed to improve performance on low-quality fingerprints, where the dictionary learning method reduces time complexity by using block processing instead of pixel processing. The dynamic range of an image is adjusted using the Successive Mean Quantization Transform (SMQT) technique, and frequency-domain noise is reduced using spectral-frequency histogram equalization. Then, an adaptive nonlinear dynamic range adjustment technique is utilized to determine the local spectral features of the corresponding fingerprint ridge frequency and orientation. The dictionary is constructed using the spatial fundamental frequency determined from the spectral features. These dictionaries help remove the spurious noise present in fingerprints. Further, the dictionaries are used to reconstruct the image for matching. The proposed FIVDL is verified on the FVC database sets, and experimental results show an improvement over state-of-the-art techniques. (C) 2015 The Authors. Published by Elsevier B.V.
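To illustrate the dictionary idea above at its simplest, the sketch below stores normalized training patches as dictionary atoms and reconstructs a noisy patch by projecting it onto its best-correlated atom (a one-atom matching pursuit). The actual FIVDL dictionary is built from spectral features of ridge frequency and orientation; this is a generic, hedged stand-in with illustrative names.

```python
import numpy as np

def build_dictionary(patches):
    """Stack L2-normalized training patches as dictionary atoms (columns)."""
    return np.array([p / np.linalg.norm(p) for p in patches]).T

def reconstruct(patch, D):
    """Approximate a (noisy) patch by its best-matching atom: a one-atom
    sparse code, the simplest case of matching pursuit."""
    p = np.asarray(patch, float)
    scores = D.T @ p                    # correlation with every atom
    k = int(np.argmax(np.abs(scores)))  # pick the strongest atom
    return scores[k] * D[:, k]          # project onto that atom

# Two orthogonal 4-pixel "patches" as training atoms; denoise a noisy copy
# of the first one by reconstruction.
D = build_dictionary([np.array([1.0, 1.0, 1.0, 1.0]),
                      np.array([1.0, -1.0, 1.0, -1.0])])
rec = reconstruct([1.1, 0.9, 1.0, 1.0], D)
```

Because reconstruction operates on whole blocks rather than individual pixels, noise components that no atom can represent are discarded in one projection per block, which is the source of the time-complexity advantage the abstract attributes to block processing.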