24 results for Visual sense
Abstract:
This paper analyzes the error exponents in Bayesian decentralized spectrum sensing, i.e., the detection of occupancy of the primary spectrum by a cognitive radio, with probability of error as the performance metric. At the individual sensors, the error exponents of a Central Limit Theorem (CLT)-based detection scheme are analyzed. At the fusion center, a K-out-of-N rule is employed to arrive at the overall decision. It is shown that, in the presence of fading, for a fixed number of sensors, the error exponents with respect to the number of observations at both the individual sensors and the fusion center are zero. This motivates the development of the error exponent with a certain probability as a novel metric for comparing detection schemes in the presence of fading. The metric is useful, for example, in answering the question of whether to sense for a pilot tone in a narrow band (and suffer Rayleigh fading) or to sense the entire wide-band signal (and suffer log-normal shadowing), in terms of error exponent performance. The error exponents with a certain probability at both the individual sensors and the fusion center are derived under both Rayleigh fading and log-normal shadowing. Numerical results illustrate and provide a visual feel for the theoretical expressions obtained.
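The K-out-of-N fusion rule mentioned in this abstract is simple enough to sketch. Below is an illustrative toy version (the sensor decisions and the choice of N = 5 are made up for illustration, not taken from the paper):

```python
# Illustrative sketch of a K-out-of-N fusion rule: the fusion center
# declares the primary user present if at least K of the N local
# one-bit sensor decisions say "present". Toy inputs, not the paper's setup.

def k_out_of_n_fusion(local_decisions, k):
    """Return 1 (occupied) if at least k of the binary local decisions are 1."""
    return int(sum(local_decisions) >= k)

# N = 5 sensors; k = 1 is the OR rule, k = N the AND rule, k = 3 majority.
decisions = [1, 0, 1, 1, 0]
print(k_out_of_n_fusion(decisions, 1))  # OR rule  -> 1
print(k_out_of_n_fusion(decisions, 3))  # majority -> 1
print(k_out_of_n_fusion(decisions, 5))  # AND rule -> 0
```

The choice of K trades false alarms against missed detections: small K makes the fusion center eager to declare occupancy, large K conservative.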
Abstract:
Plants produce volatile organic compounds (VOCs) in a variety of contexts that include response to abiotic and biotic stresses, attraction of pollinators and parasitoids, and repulsion of herbivores. Some of these VOCs may also exhibit diel variation in emission. In Ficus racemosa, we examined variation in VOCs released by fig syconia throughout syconium development and between day and night. Syconia are globular enclosed inflorescences that serve as developing nurseries for pollinating and parasitic fig wasps. Syconia are attacked by gallers early in their development, serviced by pollinators in mid phase, and are attractive to parasitoids in response to the development of gallers at later stages. VOC bouquets of the different development phases of the syconium were distinctive, as were their day and night VOC profiles. VOCs such as alpha-muurolene were characteristic of the pollen-receptive diurnal phase, and may serve to attract the diurnally-active pollinating wasps. Diel patterns of release of volatiles could not be correlated with their predicted volatility as determined by Henry's law constants at ambient temperatures. Therefore, factors other than the Henry's law constant, such as stomatal conductance or VOC synthesis, must explain diel variation in VOC emission. A novel use of weighted gene co-expression network analysis (WGCNA) on the volatilome resulted in seven distinct modules of co-emitted VOCs that could be interpreted on the basis of syconium ecology. Some modules were characterized by the response of fig syconia to early galling by parasitic wasps and consisted largely of green leaf volatiles (GLVs). Other modules, which could be characterized by a combination of the response of syconia to oviposition and tissue feeding by larvae of herbivorous galler pollinators as well as of parasitized wasps, consisted largely of putative herbivore-induced plant volatiles (HIPVs).
We demonstrated the usefulness of WGCNA analysis of the volatilome in making sense of the scents produced by the syconia at different stages and diel phases of their development.
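As a rough illustration of the co-emission idea behind this use of WGCNA: treat the volatilome as a samples-by-VOCs matrix, build a soft-thresholded correlation network over the VOCs, and read off groups of co-emitted compounds. This is a heavily simplified toy sketch, not the WGCNA package itself (real WGCNA uses topological overlap and hierarchical clustering with dynamic tree cut), and the data below are invented:

```python
# Toy sketch of co-emission module detection on a "volatilome":
# rows = samples, columns = VOCs. We soft-threshold the absolute
# correlation matrix (as WGCNA does) and then take connected
# components of the resulting network as modules.
import numpy as np

def coemission_modules(X, beta=6, cut=0.5):
    corr = np.corrcoef(X, rowvar=False)   # VOC-by-VOC correlation
    adj = np.abs(corr) ** beta            # soft thresholding emphasizes strong links
    n = adj.shape[0]
    edges = adj >= cut
    seen, modules = set(), []
    for i in range(n):                    # connected components via DFS
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            j = stack.pop()
            if j in seen:
                continue
            seen.add(j)
            comp.append(j)
            stack.extend(k for k in range(n) if edges[j, k] and k not in seen)
        modules.append(sorted(comp))
    return modules

# Invented data: VOCs 0 and 1 co-vary across samples; VOC 2 is independent.
rng = np.random.default_rng(0)
day = rng.normal(size=50)
X = np.column_stack([day,
                     day + 0.01 * rng.normal(size=50),
                     rng.normal(size=50)])
print(coemission_modules(X))
```

On this toy input the first two VOCs fall into one module and the independent VOC into its own, mirroring how co-emitted compounds cluster in the analysis described above.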
Abstract:
Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter heterogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search.
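One way to picture the null hypothesis being tested: treat 1/RT as a search rate and predict performance against a mixed-distracter display from the rates measured against each distracter type alone. The combination rule and all numbers below are hypothetical, purely to make the idea concrete:

```python
# Hedged toy model of predicting complex search from simple searches:
# the rate (1/RT) against a mixed-distracter display is modeled as a
# weighted combination of the single-distracter search rates.
# Weights and RTs are illustrative, not the paper's fitted values.

def complex_search_rate(simple_rts, weights=None):
    """Combine single-distracter search rates (1/RT) linearly."""
    rates = [1.0 / rt for rt in simple_rts]
    if weights is None:
        weights = [1.0 / len(rates)] * len(rates)  # plain average by default
    return sum(w * r for w, r in zip(weights, rates))

# Simple-search RTs (s) of the target against each distracter type alone:
simple = [0.8, 1.6]
rate = complex_search_rate(simple)   # average of 1.25 and 0.625 per-second rates
print(round(1.0 / rate, 3))          # predicted complex-search RT
```

Under this kind of model, complex search difficulty is fully inherited from the constituent simple searches, which is exactly what an r = 0.91 prediction would support.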
Abstract:
Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and coactivation models (based on reciprocals of reaction times) for their ability to predict multiple-feature searches. Multiple-feature searches were best accounted for by a coactivation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features; in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly, and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search coactivate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search.
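The linear-coactivation idea in this abstract can be sketched in a few lines: model the multiple-feature search rate as a weighted sum of single-feature search rates in 1/RT space. The weights and RTs here are invented for illustration, not the paper's estimates:

```python
# Hedged sketch of linear coactivation: single-feature search rates
# (reciprocal reaction times) add, so a target differing in several
# features is found faster than one differing in any single feature.
# All numbers below are hypothetical.

def predicted_rt(single_rts, weights):
    """Combine single-feature search RTs (s) linearly in 1/RT space."""
    rate = sum(w / rt for w, rt in zip(weights, single_rts))
    return 1.0 / rate

# Hypothetical single-feature RTs for intensity, length, orientation:
rts = [1.2, 1.5, 2.0]
# With equal unit weights the rates add (0.833 + 0.667 + 0.5 = 2.0 per s),
# so the predicted multi-feature RT is 0.5 s, faster than any single search.
print(round(predicted_rt(rts, [1.0, 1.0, 1.0]), 3))
```

A race model would instead take something like the fastest of the single-feature RTs; the paper's result favors the additive-rate form shown here.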
Abstract:
G-Quadruplexes occupy important regulatory regions in the genome. DNA G-quadruplexes in promoter regions and RNA quadruplexes in the UTRs (untranslated regions) have been individually studied and variously implicated at different regulatory levels of gene expression. However, the formation of G-quadruplexes in the sense and antisense strands and their corresponding roles in gene regulation have not been studied in much detail. In the present study, we have elucidated the effect of strand asymmetry in this context. Using biophysical methods, we have demonstrated the formation of a stable G-quadruplex structure in vitro using circular dichroism (CD) and UV melting. Additionally, isothermal titration calorimetry (ITC) was employed to demonstrate that a previously reported selective G-quadruplex ligand was able to bind and stabilize the G-quadruplex in the present sequence. Further, we have shown using reporter constructs that although the DNA G-quadruplex in either strand can reduce translation efficiency, transcriptional regulation differs depending on whether the G-quadruplex is present in the sense or the antisense strand. We demonstrate that the G-quadruplex motif in the antisense strand substantially inhibits transcription, whereas in the sense strand it does not affect transcription, although it does ultimately reduce translation. Further, it is also shown that the G-quadruplex-stabilizing ligand can enhance this asymmetric transcription regulation as a result of the increased stabilization of the G-quadruplex.
Abstract:
Regions in video streams attracting human interest contribute significantly to human understanding of the video. Being able to predict salient and informative Regions of Interest (ROIs) through a sequence of eye movements is a challenging problem. Applications such as content-aware retargeting of videos to different aspect ratios while preserving informative regions and smart insertion of dialog (closed-caption text) into the video stream can be significantly improved using the predicted ROIs. We propose an interactive human-in-the-loop framework to model eye movements and predict visual saliency in yet-unseen frames. Eye tracking and video content are used to model visual attention in a manner that accounts for important eye-gaze characteristics such as temporal discontinuities due to sudden eye movements, noise, and behavioral artifacts. A novel statistics- and algorithm-based method, gaze buffering, is proposed for eye-gaze analysis and its fusion with content-based features. Our robust saliency prediction is instantiated for two challenging and exciting applications. The first application alters video aspect ratios on-the-fly using content-aware video retargeting, thus making videos suitable for a variety of display sizes. The second application dynamically localizes active speakers and places dialog captions on-the-fly in the video stream. Our method ensures that dialogs are faithful to active speaker locations and do not interfere with salient content in the video stream. Our framework naturally accommodates personalisation of the application to suit the biases and preferences of individual users.
Abstract:
Designing a robust algorithm for visual object tracking has been a challenging task for many years. There are trackers in the literature that are reasonably accurate for many tracking scenarios, but most of them are computationally expensive. This narrows their applicability, as many tracking applications demand real-time response. In this paper, we present a tracker based on random ferns. Tracking is posed as a classification problem, and classification is done using ferns. We use ferns because they rely on binary features and are extremely fast at both training and classification compared to other classification algorithms. Our experiments show that the proposed tracker performs well on some of the most challenging tracking datasets and executes much faster than one of the state-of-the-art trackers, without much difference in tracking accuracy.
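A minimal sketch of the fern classifier idea that such trackers build on: each fern is a small set of binary comparisons whose outcomes form a bit string indexing a per-class probability table, and several ferns are combined in a semi-naive Bayes fashion. The feature pairs and training patches below are toy values, not the paper's configuration:

```python
# Toy sketch of random-fern classification. Each fern turns a feature
# vector into an index via binary comparisons; class posteriors are
# accumulated as counts and combined across ferns in log space.
import numpy as np

class Fern:
    def __init__(self, pairs, n_classes):
        self.pairs = pairs  # list of (i, j) feature-index pairs to compare
        # One count table per class, Laplace-smoothed with ones:
        self.counts = np.ones((n_classes, 2 ** len(pairs)))

    def index(self, x):
        idx = 0
        for i, j in self.pairs:
            idx = (idx << 1) | int(x[i] > x[j])  # one binary test per bit
        return idx

    def train(self, x, label):
        self.counts[label, self.index(x)] += 1

    def log_posterior(self, x):
        # log P(fern outcome | class), per class
        return np.log(self.counts[:, self.index(x)] / self.counts.sum(axis=1))

def classify(ferns, x):
    # Semi-naive Bayes: ferns are treated as independent, so log terms add.
    return int(np.argmax(sum(f.log_posterior(x) for f in ferns)))

ferns = [Fern([(0, 1), (2, 3)], 2), Fern([(1, 2), (0, 3)], 2)]
# Toy training data: class-0 patches have x[0] > x[1]; class 1 the reverse.
for f in ferns:
    f.train([9, 1, 5, 2], 0)
    f.train([1, 9, 2, 5], 1)
print(classify(ferns, [8, 2, 6, 1]))  # resembles the class-0 pattern -> 0
```

Because training and evaluation reduce to a handful of comparisons and table lookups, ferns are fast enough for the real-time response the abstract emphasizes.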
Abstract:
The nonstructural protein NSs, encoded by the S RNA of groundnut bud necrosis virus (GBNV) (genus Tospovirus, family Bunyaviridae) has earlier been shown to possess nucleic-acid-stimulated NTPase and 5′ α-phosphatase activity. ATP hydrolysis is an essential function of a true helicase. Therefore, NSs was tested for DNA helicase activity. The results demonstrated that GBNV NSs possesses bidirectional DNA helicase activity. An alanine mutation in the Walker A motif (K189A rNSs) decreased DNA helicase activity substantially, whereas a mutation in the Walker B motif resulted in a marginal decrease in this activity. The parallel loss of the helicase and ATPase activity in the K189A mutant confirms that NSs acts as a non-canonical DNA helicase. Furthermore, both the wild-type and K189A NSs could function as RNA silencing suppressors, demonstrating that the suppressor activity of NSs is independent of its helicase or ATPase activity. This is the first report of a true helicase from a negative-sense RNA virus.
Abstract:
We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.
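The additive rule described in this abstract can be made concrete with a toy model: perceived dissimilarity as a weighted sum of per-attribute differences. The attribute names, values, and weights below are invented for illustration; the paper's actual terms are fitted to behavioral data:

```python
# Toy illustration of an additive dissimilarity rule: the perceived
# difference between two objects is modeled as a weighted sum of
# differences along each attribute. All values here are hypothetical.

def dissimilarity(obj_a, obj_b, weights):
    """Sum weighted absolute differences across object attributes."""
    return sum(w * abs(obj_a[k] - obj_b[k]) for k, w in weights.items())

# Hypothetical attribute weights (contour matching, internal detail,
# emergent symmetry, global orientation) and two object descriptions:
weights = {"contour": 1.0, "texture": 0.5, "symmetry": 0.8, "orientation": 0.3}
a = {"contour": 0.9, "texture": 0.2, "symmetry": 1.0, "orientation": 0.0}
b = {"contour": 0.4, "texture": 0.6, "symmetry": 0.0, "orientation": 0.5}
print(round(dissimilarity(a, b, weights), 2))  # 0.5 + 0.2 + 0.8 + 0.15 = 1.65
```

The key property the abstract reports is exactly this additivity: each attribute contributes its own term, with no interaction terms needed.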