876 results for visual analog scale
Abstract:
We examine mid- to late Holocene centennial-scale climate variability in Ireland using proxy data from peatlands, lakes and a speleothem. A high degree of between-record variability is apparent in the proxy data and significant chronological uncertainties are present. However, tephra layers provide a robust tool for correlation and improve the chronological precision of the records. Although we can find no statistically significant coherence in the dataset as a whole, a selection of high-quality peatland water table reconstructions co-vary more than would be expected by chance alone. A locally weighted regression model with bootstrapping can be used to construct a ‘best-estimate’ palaeoclimatic reconstruction from these datasets. Visual comparison and cross-wavelet analysis of peatland water table compilations from Ireland and Northern Britain show that there are some periods of coherence between these records. Some terrestrial palaeoclimatic changes in Ireland appear to coincide with changes in the North Atlantic thermohaline circulation and solar activity. However, these relationships are inconsistent and may be obscured by chronological uncertainties. We conclude by suggesting an agenda for future Holocene climate research in Ireland.
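The ‘best-estimate’ reconstruction mentioned above can be illustrated with a short sketch. The Python fragment below applies locally weighted regression (LOWESS) with bootstrap resampling to synthetic water-table data; the data, smoothing fraction and number of resamples are assumptions for illustration, not the study's actual settings.

```python
# A minimal sketch (not the authors' code) of locally weighted regression with
# bootstrapping applied to hypothetical water-table data.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
ages = np.linspace(0, 7000, 200)                       # cal yr BP (synthetic)
water_table = np.sin(ages / 500.0) + rng.normal(0, 0.3, ages.size)

grid = np.linspace(ages.min(), ages.max(), 300)
boot_curves = []
for _ in range(500):                                   # bootstrap resamples
    idx = rng.integers(0, ages.size, ages.size)        # resample with replacement
    fit = lowess(water_table[idx], ages[idx], frac=0.2, return_sorted=True)
    boot_curves.append(np.interp(grid, fit[:, 0], fit[:, 1]))

boot_curves = np.array(boot_curves)
best_estimate = boot_curves.mean(axis=0)               # 'best-estimate' curve
lo, hi = np.percentile(boot_curves, [2.5, 97.5], axis=0)  # uncertainty band
```

The spread of the bootstrap curves provides an uncertainty envelope around the best-estimate curve.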
Abstract:
Here we consider the numerical optimization of active surface plasmon polariton (SPP) trench waveguides suited for integration with luminescent polymers for use as highly localized SPP source devices in short-scale communication integrated circuits. The numerical analysis of the SPP modes within trench waveguide systems provides detailed information on the mode field components, effective indices, propagation lengths and mode areas. Such trench waveguide systems offer extremely high confinement with propagation on length scales appropriate to local interconnects, along with high efficiency coupling of dipolar emitters to waveguided plasmonic modes which can be close to 80%. The large Purcell factor exhibited in these structures will further lead to faster modulation capabilities along with an increased quantum yield beneficial for the proposed plasmon-emitting diode, a plasmonic analog of the light-emitting diode. The confinement of studied guided modes is on the order of 50 nm and the delay over the shorter 5 μm length scales will be on the order of 0.1 ps for the slowest propagating modes of the system, and significantly less for the faster modes.
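For orientation, the propagation-length and delay figures quoted above follow from standard relations. The sketch below evaluates them for assumed mode parameters; the wavelength, complex effective index and group index are illustrative values, not results from the paper.

```python
# Back-of-the-envelope evaluation of SPP mode figures of merit.
import numpy as np

c = 2.998e8                  # speed of light, m/s
wavelength = 1.55e-6         # assumed operating wavelength, m
n_eff = 1.6 + 0.02j          # assumed complex effective index of a trench mode
n_group = 6.0                # assumed group index of a slow mode
length = 5e-6                # interconnect length quoted in the abstract, m

# 1/e intensity propagation length of the plasmonic mode
L_prop = wavelength / (4 * np.pi * n_eff.imag)

# group delay accumulated over the 5 um link
delay = length * n_group / c

print(f"propagation length ~ {L_prop * 1e6:.1f} um")
print(f"delay over 5 um    ~ {delay * 1e15:.0f} fs")   # ~0.1 ps for n_g ~ 6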
Abstract:
A previous review of research on the practice of offender supervision identified the predominant use of interview-based methodologies and limited use of other research approaches (Robinson and Svensson, 2013). It also found that most research has tended to be locally focussed (i.e. limited to one jurisdiction) with very few comparative studies. This article reports on the application of a visual method in a small-scale comparative study. Practitioners in five European countries participated and took photographs of the places and spaces where offender supervision occurs. The aims of the study were two-fold: firstly to explore the utility of a visual approach in a comparative context; and secondly to provide an initial visual account of the environment in which offender supervision takes place. In this article we address the first of these aims. We describe the application of the method in some depth before addressing its strengths and weaknesses. We conclude that visual methods provide a useful tool for capturing data about the environments in which offender supervision takes place and potentially provide a basis for more normative explorations about the practices of offender supervision in comparative contexts.
Abstract:
Rapid blue- and redshifted excursions (RBEs and RREs) are likely to be the on-disk counterparts of Type II spicules. Recently, heating signatures from RBEs/RREs have been detected in IRIS slit-jaw images dominated by transition region (TR) lines around network patches. Additionally, signatures of Type II spicules have been observed in Atmospheric Imaging Assembly (AIA) diagnostics. The full-disk, ever-present nature of the AIA diagnostics should provide us with sufficient statistics to directly determine how important RBEs and RREs are to the heating of the TR and corona. We find, with high statistical significance, that at least 11% of the low coronal brightenings detected in a quiet-Sun region in He II 304 Å can be attributed to either RBEs or RREs as observed in Hα, and that 6% of detected Fe IX 171 Å events match RBEs or RREs, with very similar statistics for both types of Hα features. We took a statistical approach that allows for noisy detections in the coronal channels and provides us with a lower, but statistically significant, bound. Further, we consider matches based on overlapping features in both time and space, and find strong visual indications of further correspondence between coronal events and co-evolving but non-overlapping RBEs and RREs.
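The matching of coronal brightenings to RBEs/RREs in space and time can be illustrated with a hedged sketch; the event format, matching radius and synthetic catalogues below are assumptions, not the authors' detection pipeline.

```python
# Counting what fraction of coronal brightenings overlap an RBE/RRE detection
# in both position and time (synthetic catalogues for illustration only).
import numpy as np

rng = np.random.default_rng(1)
# Each event: (x [arcsec], y [arcsec], t_start [s], t_end [s])
coronal = rng.uniform([0, 0, 0, 0], [60, 60, 3600, 3600], size=(200, 4))
coronal[:, 3] = coronal[:, 2] + rng.uniform(30, 300, 200)      # durations
rbe_rre = rng.uniform([0, 0, 0, 0], [60, 60, 3600, 3600], size=(500, 4))
rbe_rre[:, 3] = rbe_rre[:, 2] + rng.uniform(10, 120, 500)

max_sep = 2.0   # assumed spatial matching radius, arcsec

def matches(ev, catalog, max_sep):
    """True if any catalogue event overlaps ev in time and lies within max_sep."""
    overlap = (catalog[:, 2] <= ev[3]) & (catalog[:, 3] >= ev[2])
    dist = np.hypot(catalog[:, 0] - ev[0], catalog[:, 1] - ev[1])
    return np.any(overlap & (dist <= max_sep))

frac = np.mean([matches(ev, rbe_rre, max_sep) for ev in coronal])
print(f"fraction of coronal brightenings matched to an RBE/RRE: {frac:.2f}")
```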
Abstract:
Object categorisation is linked to detection, segregation and recognition. In the visual system, these processes are achieved in the ventral “what” and dorsal “where” pathways [3], with bottom-up feature extractions in areas V1, V2, V4 and IT (what) in parallel with top-down attention from PP via MT to V2 and V1 (where). The latter is steered by object templates in memory, i.e. in prefrontal cortex with a “what” component in PF46v and a “where” component in PF46d.
Abstract:
Models of visual perception are based on image representations in cortical area V1 and higher areas which contain many cell layers for feature extraction. Basic simple, complex and end-stopped cells provide input for line, edge and keypoint detection. In this paper we present an improved method for multi-scale line/edge detection based on simple and complex cells. We illustrate the line/edge representation for object reconstruction, and we present models for multi-scale face (object) segregation and recognition that can be embedded into feedforward dorsal and ventral data streams (the “what” and “where” subsystems) with feedback streams from higher areas for obtaining translation, rotation and scale invariance.
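A minimal sketch of this kind of front end, assuming quadrature Gabor filters as simple cells and their modulus as complex cells, computed at a few illustrative scales and orientations (the filter parameters are not the paper's):

```python
# Simple cells ~ quadrature Gabor pair; complex cells ~ modulus of the pair.
import numpy as np
from skimage import data, filters

image = data.camera().astype(float) / 255.0

frequencies = [0.05, 0.1, 0.2]                    # coarse to fine scales (assumed)
orientations = np.linspace(0, np.pi, 8, endpoint=False)

complex_cells = {}                                # (frequency, theta) -> response map
for f in frequencies:
    for theta in orientations:
        real, imag = filters.gabor(image, frequency=f, theta=theta)
        complex_cells[(f, theta)] = np.hypot(real, imag)   # complex-cell modulus

# A crude line/edge event map: positions where the complex-cell response,
# pooled over orientation at the finest scale, is unusually strong.
finest = np.max([complex_cells[(0.2, th)] for th in orientations], axis=0)
events = finest > finest.mean() + 2 * finest.std()
```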
Abstract:
Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Models of visual perception are based on image representations in cortical area V1 and beyond, which contain many cell layers for feature extraction. Simple, complex and end-stopped cells provide input for line, edge and keypoint detection. Detected events provide a rich, multi-scale object representation, and this representation can be stored in memory in order to identify objects. In this paper, the above context is applied to face recognition. The multi-scale line/edge representation is explored in conjunction with keypoint-based saliency maps for Focus-of-Attention. Recognition rates of up to 96% were achieved by combining frontal and 3/4 views, and recognition was quite robust against partial occlusions.
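A hedged sketch of a keypoint-based saliency map for Focus-of-Attention: the keypoint lists per scale are hypothetical inputs (in the model they would come from end-stopped cells), and the scale weighting is an assumption.

```python
# Saliency approximated as scale-weighted keypoint density.
import numpy as np
from scipy.ndimage import gaussian_filter

shape = (128, 128)
# Hypothetical keypoints per scale: {sigma: array of (row, col)}
keypoints = {
    2.0: np.array([[40, 50], [42, 78], [70, 64]]),   # e.g. eyes and mouth (assumed)
    4.0: np.array([[55, 64]]),                       # coarser facial structure (assumed)
}

saliency = np.zeros(shape)
for sigma, pts in keypoints.items():
    impulse = np.zeros(shape)
    impulse[pts[:, 0], pts[:, 1]] = 1.0
    saliency += sigma * gaussian_filter(impulse, sigma)   # coarser scales weigh more

fixation = np.unravel_index(np.argmax(saliency), shape)   # first Focus-of-Attention
```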
Abstract:
Lines and edges provide important information for object categorization and recognition. In addition, a brightness model is based on a symbolic interpretation of the cortical multi-scale line/edge representation. In this paper we present an improved scheme for line/edge extraction from simple and complex cells and we illustrate the multi-scale representation. This representation can be used for visual reconstruction, but also for non-photorealistic rendering. Together with keypoints and a new model of disparity estimation, a 3D wireframe representation of, e.g., faces could be obtained in the future.
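For disparity, one classical option built on the same Gabor machinery is phase-based estimation: the left-right phase difference divided by the filter's angular frequency. The sketch below shows that scheme on a synthetically shifted image; it is given for illustration only and is not necessarily the paper's new disparity model.

```python
# Classical phase-based disparity from complex Gabor responses.
import numpy as np
from skimage import data, filters

left = data.camera().astype(float) / 255.0
right = np.roll(left, 3, axis=1)          # synthetic 3-pixel horizontal shift (wrapped)

freq = 0.1                                 # cycles per pixel (assumed)
re_l, im_l = filters.gabor(left, frequency=freq, theta=0)
re_r, im_r = filters.gabor(right, frequency=freq, theta=0)

phase_l = np.arctan2(im_l, re_l)
phase_r = np.arctan2(im_r, re_r)
dphi = np.angle(np.exp(1j * (phase_l - phase_r)))   # wrap to (-pi, pi]

disparity = dphi / (2 * np.pi * freq)      # pixels; reliable only where responses are strong
confidence = np.hypot(re_l, im_l) * np.hypot(re_r, im_r)
```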
Abstract:
Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Models of visual perception are based on image representations in cortical area V1 and beyond, which contain many cell layers for feature extraction. Simple, complex and end-stopped cells tuned to different spatial frequencies (scales) and/or orientations provide input for line, edge and keypoint detection. This yields a rich, multi-scale object representation that can be stored in memory in order to identify objects. The multi-scale, keypoint-based saliency maps for Focus-of-Attention can be explored to obtain face detection and normalization, after which face recognition can be achieved using the line/edge representation. In this paper, we focus only on face normalization, showing that multi-scale keypoints can be used to construct canonical representations of faces in memory.
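A minimal sketch of the normalization step, assuming that keypoints at the eyes have already been located: a similarity transform maps the detected positions to canonical positions in a fixed-size frame, removing translation, rotation and scale. The coordinates and frame size below are hypothetical.

```python
# Face normalization via a similarity transform estimated from two eye keypoints.
import numpy as np
from skimage import data, transform

face = data.astronaut()[:, :, 0].astype(float) / 255.0

detected_eyes = np.array([[188.0, 210.0], [168.0, 265.0]])   # (row, col), assumed
canonical_eyes = np.array([[60.0, 48.0], [60.0, 80.0]])      # targets in a 128x128 frame

tform = transform.SimilarityTransform()
# estimate() expects (x, y) order, so swap (row, col) -> (col, row)
tform.estimate(canonical_eyes[:, ::-1], detected_eyes[:, ::-1])
normalized = transform.warp(face, tform, output_shape=(128, 128))   # canonical face
```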
Abstract:
In this paper we present an improved scheme for line and edge detection in cortical area V1, based on responses of simple and complex cells, truly multi-scale with no free parameters. We illustrate the multi-scale representation for visual reconstruction, and show how object segregation can be achieved with coarse-to-fine scale groupings. A two-level object categorization scenario is tested in which pre-categorization is based on coarse scales only, and final categorization on coarse plus fine scales. Processing schemes are discussed in the framework of a complete cortical architecture.
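The two-level scenario can be sketched as follows: pre-categorization compares query and templates at coarse scales only, and the surviving candidates are re-scored with fine scales added. Gaussian blurring stands in here for the cortical multi-scale representation, and all data are synthetic.

```python
# Two-level categorization: coarse-only pre-categorization, then coarse+fine scoring.
import numpy as np
from scipy.ndimage import gaussian_filter

def features(img, sigmas):
    """Stand-in multi-scale representation: the image blurred at several scales."""
    return np.stack([gaussian_filter(img, s) for s in sigmas])

rng = np.random.default_rng(2)
templates = {name: rng.random((64, 64)) for name in ("cup", "car", "face")}
query = templates["car"] + rng.normal(0, 0.05, (64, 64))      # noisy instance

coarse, fine = (8.0, 4.0), (2.0, 1.0)                         # assumed scales

# Level 1: pre-categorization on coarse scales only.
coarse_scores = {n: -np.linalg.norm(features(query, coarse) - features(t, coarse))
                 for n, t in templates.items()}
candidates = sorted(coarse_scores, key=coarse_scores.get, reverse=True)[:2]

# Level 2: final categorization on coarse plus fine scales.
all_scales = coarse + fine
final_scores = {n: -np.linalg.norm(features(query, all_scales)
                                   - features(templates[n], all_scales))
                for n in candidates}
best = max(final_scores, key=final_scores.get)
```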
Abstract:
Hypercolumns in area V1 contain frequency- and orientation-selective simple and complex cells for line (bar) and edge coding, plus end-stopped cells for keypoint (vertex) detection. A single-scale (single-frequency) mathematical model of single and double end-stopped cells on the basis of Gabor filter responses was developed by Heitger et al. (1992, Vision Research 32, 963-981). We developed an improved model by stabilising keypoint detection over neighbouring micro-scales.
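A hedged sketch of the stabilization idea: candidate keypoints are detected at several neighbouring micro-scales and only those that reappear within a small radius at the adjacent scales are kept. A Harris corner operator is used below as a stand-in for the end-stopped cell responses; this is not the Heitger et al. model itself.

```python
# Keypoint stabilization across neighbouring micro-scales (Harris as a stand-in).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import data, feature

image = data.camera().astype(float) / 255.0
micro_scales = (1.0, 1.4, 2.0)                    # assumed neighbouring scales

peaks_per_scale = [
    feature.corner_peaks(feature.corner_harris(gaussian_filter(image, s)),
                         min_distance=5)
    for s in micro_scales
]

def has_neighbour(p, peaks, radius=3.0):
    """True if a keypoint within `radius` pixels exists in the other scale's list."""
    return np.any(np.hypot(peaks[:, 0] - p[0], peaks[:, 1] - p[1]) <= radius)

# Keep keypoints at the middle scale that also appear at both adjacent scales.
stable = [p for p in peaks_per_scale[1]
          if has_neighbour(p, peaks_per_scale[0]) and has_neighbour(p, peaks_per_scale[2])]
```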
Abstract:
In this paper we present a brief overview of the processing in the primary visual cortex, the multi-scale line/edge and keypoint representations, and a model of brightness perception. This model, which is being extended from 1D to 2D, is based on a symbolic line and edge interpretation: lines are represented by scaled Gaussians and edges by scaled, Gaussian-windowed error functions. We show that this model, in combination with standard techniques from graphics, provides a very fertile basis for non-photorealistic image rendering.
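A 1D sketch of the symbolic interpretation described above: each detected line event is rendered as a scaled Gaussian and each edge event as a scaled, Gaussian-windowed error function, and the events are summed onto a mean luminance level. Event positions, amplitudes, scales and the window width are illustrative assumptions.

```python
# Symbolic 1D brightness reconstruction from line and edge events.
import numpy as np
from scipy.special import erf

x = np.linspace(0, 100, 1000)

def line_event(x, pos, amplitude, sigma):
    """Line: scaled Gaussian centred at the event position."""
    return amplitude * np.exp(-0.5 * ((x - pos) / sigma) ** 2)

def edge_event(x, pos, amplitude, sigma, window=6.0):
    """Edge: error function windowed by a broad Gaussian (window width assumed)."""
    profile = erf((x - pos) / (np.sqrt(2) * sigma))
    return amplitude * profile * np.exp(-0.5 * ((x - pos) / (window * sigma)) ** 2)

# Hypothetical detected events: (type, position, amplitude, scale)
events = [("edge", 30.0, 0.5, 2.0), ("line", 55.0, 0.8, 1.5), ("edge", 80.0, -0.4, 3.0)]

brightness = np.full_like(x, 0.5)        # assumed mean luminance level
for kind, pos, amp, sigma in events:
    brightness += (line_event(x, pos, amp, sigma) if kind == "line"
                   else edge_event(x, pos, amp, sigma))
```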
Abstract:
Computer vision for real-time applications requires tremendous computational power because all images must be processed from the first to the last pixel. Active vision by probing specific objects on the basis of already acquired context may lead to a significant reduction of processing. This idea is based on a few concepts from our visual cortex (Rensink, Visual Cogn. 7, 17-42, 2000): (1) our physical surround can be seen as memory, i.e. there is no need to construct detailed and complete maps, (2) the bandwidth of the what and where systems is limited, i.e. only one object can be probed at any time, and (3) bottom-up, low-level feature extraction is complemented by top-down hypothesis testing, i.e. there is a rapid convergence of activities in dendritic/axonal connections.
Abstract:
Object recognition requires that templates with canonical views are stored in memory. Such templates must somehow be normalised. In this paper we present a novel method for obtaining 2D translation, rotation and size invariance. Cortical simple, complex and end-stopped cells provide multi-scale maps of lines, edges and keypoints. These maps are combined such that objects are characterised. Dynamic routing in neighbouring neural layers allows feature maps of input objects and stored templates to converge. We illustrate the construction of group templates and the invariance method for object categorisation and recognition in the context of a cortical architecture, which can be applied in computer vision.