960 results for Processing image


Relevance: 30.00%

Publisher:

Abstract:

Current fusion devices comprise multiple diagnostics and hundreds or even thousands of signals, which often makes distributed data acquisition systems the best approach. In this type of distributed system, one of the most important issues is synchronization between signals, so that the acquired samples of all channels can be correlated in time as accurately as possible. In recent decades, many fusion devices have used different types of video cameras to provide views inside the vessel during operation and to monitor plasma behavior. Synchronizing each video frame with the signals acquired from the other diagnostics is essential for correctly following the plasma evolution, since all the information can then be analyzed jointly with accurate knowledge of its temporal correlation. The system described in this paper timestamps image frames in a real-time acquisition and processing system using IEEE 1588 clock distribution. The system has been implemented using FPGA-based devices together with an IEEE 1588 synchronized timing card (see Fig. 1). The solution builds on a previous system [1] for image acquisition and real-time image processing based on PXIe technology. That architecture is fully compatible with the ITER Fast Controllers [2] and integrates with EPICS to control and monitor the entire system. However, it cannot timestamp the acquired frames, since the frame grabber module has no timing input of any kind (IRIG-B, GPS, PTP). To overcome this limitation, an IEEE 1588 PXI timing device is used to provide an accurate way to synchronize distributed data acquisition systems using the Precision Time Protocol (PTP) of the IEEE 1588-2008 standard. This local timing device can be connected to a master clock device for global synchronization. The timing device provides a timestamp buffer for each PXI trigger line and requires a software application to assign each frame its corresponding timestamp. This step is critical and cannot be carried out in software when the frame rate is high. To solve this problem, a solution has been designed that distributes the clock from the IEEE 1588 timing card to all FlexRIO devices [3]. This solution uses two PXI trigger lines, which make it possible to assign a timestamp to every acquired frame and to register events in hardware in a deterministic way. The system thus provides a means of timestamping frames so that they can be synchronized with the rest of the signals.
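
The core of the fix is pairing each acquired frame with a hardware-latched PTP timestamp. A minimal Python sketch of that pairing logic follows; it is an illustration only (the paper performs this step in FPGA hardware precisely because a software loop like this cannot keep up at high frame rates), and all names here (`FrameTimestamper`, `on_trigger`, `on_frame`) are hypothetical.

```python
from collections import deque

class FrameTimestamper:
    """Pairs frames with hardware timestamps latched by a PTP-synced timing card.

    Hypothetical sketch: `timestamp_fifo` stands in for the buffer that the
    IEEE 1588 timing device fills on every PXI trigger-line pulse.
    """

    def __init__(self):
        self.timestamp_fifo = deque()  # hardware-latched PTP timestamps (ns)
        self.frame_fifo = deque()      # frames delivered by the frame grabber

    def on_trigger(self, ptp_time_ns):
        # Called once per PXI trigger pulse; the timing card latched ptp_time_ns.
        self.timestamp_fifo.append(ptp_time_ns)

    def on_frame(self, frame):
        # Called once per frame from the frame grabber.
        self.frame_fifo.append(frame)

    def pop_stamped_frame(self):
        # Frames and trigger timestamps arrive in the same order, so pairing
        # them is a matter of matching the two FIFOs one-to-one.
        if self.timestamp_fifo and self.frame_fifo:
            return self.timestamp_fifo.popleft(), self.frame_fifo.popleft()
        return None
```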

Relevance: 30.00%

Publisher:

Abstract:

The main problem in studying vertical drainage from the moisture distribution in a vertisol profile is finding suitable methods for doing so. Our aim was to design a digital image processing and analysis methodology to characterize the moisture content distribution of a vertisol profile. In this research, twelve soil pits were excavated in a bare Mazic Pellic Vertisol, six of them on 13 May 2011 and the rest on 19 May 2011, after a moderate rainfall event. Digital RGB images were taken of each vertisol pit using a Kodak camera at a size of 1600 x 945 pixels. Each soil image was processed to homogenize brightness, and then a spatial filter with several window sizes was applied in order to select the optimum one. The resulting RGB images were split into their color matrices, and the best maximum and minimum thresholds were selected for each matrix to obtain a digital binary pattern. This pattern was analyzed by estimating two fractal scaling exponents, the box-counting dimension (D_BC) and the interface fractal dimension (D). In addition, three pre-fractal scaling coefficients were determined at maximum resolution: the total number of boxes intercepting the foreground pattern (A), the fractal lacunarity (Λ1), and the Shannon entropy (S1). For all the images processed, the 9 x 9 spatial filter was optimal based on entropy, cluster, and histogram criteria. Thresholds for each color were selected based on bimodal histograms.
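
The box-counting dimension D_BC mentioned above can be estimated with a standard log-log fit. The numpy sketch below shows the usual estimator (count the boxes intersecting the foreground at several box sizes, then fit the slope); it is not necessarily the authors' exact implementation.

```python
import numpy as np

def box_counting_dimension(binary, box_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate D_BC of a 2-D binary pattern by counting, at each box size,
    the boxes that intersect the foreground, then fitting log N vs log(1/s)."""
    counts = []
    for s in box_sizes:
        h, w = binary.shape
        # Trim so the image tiles exactly into s x s boxes.
        trimmed = binary[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append((boxes.sum(axis=(1, 3)) > 0).sum())
    # Slope of the log-log fit is the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Example: a filled square should give a dimension close to 2.
pattern = np.zeros((128, 128), dtype=bool)
pattern[32:96, 32:96] = True
print(box_counting_dimension(pattern))
```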

Relevance: 30.00%

Publisher:

Abstract:

LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the error between the predictions and the actual values is then logarithmically quantised. The main advantage of LHE is that, although it is capable of achieving low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG 2000, while being more computationally efficient.
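
The logarithmic quantisation step can be illustrated with a toy quantiser: prediction errors are snapped to a small set of "hops" whose magnitudes grow geometrically, mirroring the Weber–Fechner law. This is a sketch in the spirit of LHE, not the published quantiser; `n_hops`, `base`, and the smallest hop `h1` are illustrative parameters.

```python
import numpy as np

def log_quantise_error(pred, actual, n_hops=4, base=2.0, h1=4.0):
    """Illustrative logarithmic quantiser in the spirit of LHE: the error
    between prediction and actual value is mapped to one of a small set of
    'hops' whose magnitudes grow geometrically (Weber-Fechner-like spacing)."""
    hops = h1 * base ** np.arange(n_hops)             # e.g. 4, 8, 16, 32
    levels = np.concatenate([-hops[::-1], [0.0], hops])
    err = float(actual) - float(pred)
    code = int(np.argmin(np.abs(levels - err)))       # nearest hop index
    return code, levels[code]                         # code to emit, decoded value

# Example: predict 100 from neighbours, observe 117 -> nearest hop is +16.
print(log_quantise_error(100, 117))
```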

Relevance: 30.00%

Publisher:

Abstract:

In this work we review some earlier distributed algorithms developed by the authors and collaborators, based on two different approaches: distributed moment estimation and distributed stochastic approximation. We show applications of these algorithms to image compression, linear classification, and stochastic optimal control. In all cases, the benefit of cooperation is clear: even when each node has access to only a small portion of the data, by exchanging their estimates the nodes achieve the same performance as a centralized architecture that gathers all the data from all the nodes.
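
The "benefit of cooperation" can be illustrated with the simplest such scheme, consensus averaging: each node repeatedly mixes its estimate with its neighbours' until the whole network agrees on the centralized average. The sketch below is a generic diffusion/consensus step under an assumed ring topology, not any of the authors' specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each of 5 nodes sees a small, noisy portion of the data and forms a
# local estimate of a common parameter (true value 3.0).
local_estimates = 3.0 + rng.normal(0.0, 1.0, size=5)

# Doubly stochastic mixing matrix for a ring network: each node averages
# with its two neighbours.  Repeated mixing drives all nodes to the
# network-wide average, i.e. the estimate a centralised node would form.
W = np.zeros((5, 5))
for i in range(5):
    W[i, i] = 0.5
    W[i, (i - 1) % 5] = 0.25
    W[i, (i + 1) % 5] = 0.25

x = local_estimates.copy()
for _ in range(50):
    x = W @ x

print(local_estimates, "->", x)  # all entries converge to the mean
```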

Relevance: 30.00%

Publisher:

Abstract:

Vision extracts useful information from images. Reconstructing the three-dimensional structure of our environment and recognizing the objects that populate it are among the most important functions of our visual system. Computer vision researchers study the computational principles of vision and aim at designing algorithms that reproduce these functions. Vision is difficult: the same scene may give rise to very different images depending on illumination and viewpoint. Typically, an astronomical number of hypotheses exist that in principle have to be analyzed to infer a correct scene description. Moreover, image information may be extracted at different levels of spatial and logical resolution depending on the image processing task. Knowledge of the world allows the visual system to limit the ambiguity and to greatly simplify visual computations. We discuss how simple properties of the world are captured by the Gestalt rules of grouping, how the visual system may learn and organize models of objects for recognition, and how one may control the complexity of the description that the visual system computes.

Relevance: 30.00%

Publisher:

Abstract:

Rapid progress in effective methods for imaging brain function has revolutionized neuroscience. It is now possible to study noninvasively, in humans, neural processes that were previously accessible only in experimental animals and in brain-injured patients. In this endeavor, positron emission tomography has been the leader, but superconducting-quantum-interference-device-based magnetoencephalography (MEG) is gaining a firm role, too. With the advent of instruments covering the whole scalp, MEG, with its typical 5-mm spatial and 1-ms temporal resolution, allows neuroscientists to track cortical functions accurately in time and space. We present five representative examples of recent MEG studies in our laboratory that demonstrate the usefulness of whole-head magnetoencephalography in investigations of the spatiotemporal dynamics of cortical signal processing.

Relevance: 30.00%

Publisher:

Abstract:

The visual responses of neurons in the cerebral cortex were first adequately characterized in the 1960s by D. H. Hubel and T. N. Wiesel [(1962) J. Physiol. (London) 160, 106-154; (1968) J. Physiol. (London) 195, 215-243] using qualitative analyses based on simple geometric visual targets. Over the past 30 years, it has become common to characterize the properties of these neurons by attempting to give formal descriptions of the transformations they execute on the visual image. Most such models have their roots in the linear-systems approaches pioneered in the retina by C. Enroth-Cugell and J. R. Robson [(1966) J. Physiol. (London) 187, 517-552], but it is clear that purely linear models of cortical neurons are inadequate. We present two related models: one designed to account for the responses of simple cells in primary visual cortex (V1), and one designed to account for the responses of pattern-direction-selective cells in MT (or V5), an extrastriate visual area thought to be involved in the analysis of visual motion. These models share a common structure that operates in the same way on different kinds of input, and they instantiate the widely held view that computational strategies are similar throughout the cerebral cortex. Implementations of these models for Macintosh microcomputers are available and can be used to explore the models' properties.
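
The model class referred to here for simple cells is the linear-nonlinear (LN) cascade: a Gabor-shaped linear receptive field followed by a nonlinearity such as half-wave rectification, the minimal departure from a purely linear model. The sketch below illustrates that class only; it is not the authors' implementation, and all parameter values are illustrative.

```python
import numpy as np

def gabor(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
    """Gabor receptive field: the classic linear model of a V1 simple cell."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def simple_cell_response(image_patch, rf):
    """Linear-nonlinear cascade: linear filtering followed by half-wave
    rectification, one minimal departure from a purely linear model."""
    drive = float(np.sum(image_patch * rf))
    return max(drive, 0.0)

# Example: an oriented grating matched to the filter drives the cell strongly.
rf = gabor(theta=0.0)
grating = np.cos(2 * np.pi * np.arange(21) / 8.0)[None, :].repeat(21, axis=0)
print(simple_cell_response(grating, rf))
```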

Relevance: 30.00%

Publisher:

Abstract:

In this paper we discuss some of the main image processing techniques in order to propose a classification based upon the output these methods provide. Although a particular image analysis technique may be supervised or unsupervised, and may or may not allow fuzzy information at some stage, each technique has usually been designed to focus on a specific objective, and their outputs are in fact different according to each objective; they are therefore different methods. Yet, because of the essential relationship between them, they are quite often confused. In particular, this paper pursues a clarification of the differences between image segmentation and edge detection, among other image processing techniques.
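
The output-based distinction between segmentation and edge detection can be made concrete: segmentation returns a label per pixel (a partition into regions), while edge detection returns a per-pixel boundary response. A small numpy sketch, assuming simple thresholding and a Sobel operator as representatives of each family:

```python
import numpy as np

def threshold_segmentation(img, t):
    """Segmentation output: every pixel gets a region label (here 0 or 1)."""
    return (img > t).astype(np.uint8)

def sobel_edges(img):
    """Edge-detection output: per-pixel gradient magnitude, high on region
    boundaries and low inside homogeneous regions."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = sum(kx[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3))
    gy = sum(kx.T[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

# A dark/bright split image: segmentation labels the two regions, while
# edge detection responds only along the boundary between them.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
print(threshold_segmentation(img, 0.5))
print(sobel_edges(img).round(1))
```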

Relevance: 30.00%

Publisher:

Abstract:

Poster presented at SPIE Photonics Europe, Brussels, 16-19 April 2012.

Relevance: 30.00%

Publisher:

Abstract:

A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer-group concept and uses a fuzzy metric. An optimization study of the use of the CUDA platform to remove impulsive noise with this algorithm is presented, together with an implementation of the algorithm on multi-core platforms using OpenMP. Performance is evaluated in terms of execution time, comparing the multi-core implementation, the GPU implementation, and the combination of both. A performance analysis with large images is conducted in order to determine how many pixels to allocate to the CPU and to the GPU; the observed times show that both devices should be given work, with most of it going to the GPU. Results show that parallel implementations of denoising filters on GPUs and multi-cores are very advisable, and they open the door to using such algorithms for real-time processing.
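
The peer-group idea can be sketched serially: a pixel is kept if enough neighbours are close to it under a fuzzy metric, and replaced otherwise. Since each output pixel is computed independently, the loop parallelises naturally, which is what the OpenMP/CUDA implementations exploit. The metric and thresholds below are common illustrative choices, not necessarily the paper's exact definitions.

```python
import numpy as np

def fuzzy_metric(a, b, K=1024.0):
    """One common fuzzy metric between RGB vectors: values near 1 mean the
    pixels are similar.  K and the metric itself are illustrative choices."""
    a, b = a.astype(float), b.astype(float)
    return np.prod((np.minimum(a, b) + K) / (np.maximum(a, b) + K))

def peer_group_denoise(img, m=3, t=0.97):
    """Peer-group filtering sketch: a pixel with fewer than m neighbours
    'close' to it (fuzzy metric above t) is declared impulsive noise and
    replaced by the median of its 3x3 neighbourhood.  Each output pixel is
    independent, which makes the per-pixel parallelisation natural."""
    out = img.copy()
    h, w, _ = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y, x]
            window = img[y - 1:y + 2, x - 1:x + 2].reshape(9, 3)
            # Subtract 1 so the centre pixel does not count as its own peer.
            peers = sum(fuzzy_metric(centre, p) >= t for p in window) - 1
            if peers < m:  # too few peers: treat as impulse noise
                out[y, x] = np.median(window, axis=0)
    return out
```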

Relevance: 30.00%

Publisher:

Abstract:

Measurement of concrete strain by non-invasive methods is of great importance in civil engineering and structural analysis. Traditional methods use laser speckle and high-quality cameras that may be too expensive for many applications. Here we present a method for measuring concrete deformation with a standard reflex camera and image processing that tracks objects on the concrete's surface. Two different approaches are presented. In the first, purpose-made marks are drawn on the surface, while in the second we track small defects on the surface caused by air bubbles during the hardening process. The method has been tested on a concrete sample under several loading/unloading cycles. A stop-motion sequence of the process was captured and analyzed, and the results were successfully compared with the values given by a strain gauge. The accuracy of our methods in tracking objects is below 8 μm, on the order of that of more expensive commercial devices.
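
Tracking a surface mark between images is classically done by normalised cross-correlation, and strain follows from the change in distance between two tracked marks. Below is a sketch using OpenCV's template matching, with the sub-pixel refinement needed for micrometre accuracy omitted; the function names and box parameters are hypothetical, not the paper's implementation.

```python
import cv2
import numpy as np

def track_mark(reference, current, mark_box):
    """Locate a surface mark from the reference image in the current image
    using normalised cross-correlation (sub-pixel refinement omitted)."""
    x, y, w, h = mark_box
    template = reference[y:y + h, x:x + w]
    result = cv2.matchTemplate(current, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc  # (x, y) of the best match in `current`

def strain_between(ref_img, cur_img, box_a, box_b):
    """Strain = change in distance between two tracked marks divided by
    their initial distance (the gauge length)."""
    a0, b0 = np.array(box_a[:2], float), np.array(box_b[:2], float)
    a1 = np.array(track_mark(ref_img, cur_img, box_a), float)
    b1 = np.array(track_mark(ref_img, cur_img, box_b), float)
    gauge = np.linalg.norm(b0 - a0)
    return (np.linalg.norm(b1 - a1) - gauge) / gauge
```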

Relevance: 30.00%

Publisher:

Abstract:

Human behaviour recognition has been, and still remains, a challenging problem that involves different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic in which the computer vision and pattern recognition communities have made big efforts. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early, using only a small portion of the input, in addition to predicting behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of a person's trajectory in the scene, which allows a high-level understanding of global human behaviour. The trajectory representation is used as a descriptor of the individual's activity, and the descriptors feed a classification stage for pattern recognition purposes. Classifiers are trained using the trajectory representation of the complete sequence, while partial sequences are processed to evaluate the early-prediction capability for a given observation time of the scene. The experiments were carried out on three different datasets of the CAVIAR database, taking into account the behaviour of an individual, and several classic classifiers were used in order to evaluate the robustness of the proposal. Results confirm the high accuracy of the proposal in the early recognition of people's behaviours.
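
One way to make complete and partial trajectories comparable, as the evaluation requires, is to resample every trajectory to a fixed-length, translation- and scale-normalised descriptor before classification. The sketch below uses this generic descriptor with an off-the-shelf SVM on toy data; the paper's actual representation and classifiers differ.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def trajectory_descriptor(points, n=16):
    """Resample a variable-length (x, y) trajectory to a fixed-length,
    translation- and scale-normalised descriptor, so partial sequences
    resemble their full-length counterparts."""
    pts = np.asarray(points, dtype=float)
    idx = np.linspace(0, len(pts) - 1, n)
    resampled = np.stack([np.interp(idx, np.arange(len(pts)), pts[:, d])
                          for d in range(2)], axis=1)
    rel = resampled - resampled[0]                  # translation invariance
    return (rel / (np.abs(rel).max() + 1e-9)).ravel()  # scale invariance

def synthetic(label, length=40):
    """Toy trajectories: label 0 walks right, label 1 walks diagonally."""
    steps = np.ones((length, 2)) * (1.0, 0.0 if label == 0 else 1.0)
    return np.cumsum(steps + rng.normal(0, 0.3, (length, 2)), axis=0)

# Train on descriptors of complete trajectories.
X = [trajectory_descriptor(synthetic(l)) for l in [0, 1] * 20]
y = [0, 1] * 20
clf = SVC().fit(X, y)

# Early prediction: classify from only the first third of a new trajectory.
partial = synthetic(1)[:13]
print(clf.predict([trajectory_descriptor(partial)]))  # expected: [1]
```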

Relevance: 30.00%

Publisher:

Abstract:

Identifying cloud interference in satellite-derived data is a critical step toward developing useful remotely sensed products. Most MODIS land products use a combination of the MODIS (MOD35) cloud mask and the 'internal' cloud mask of the surface reflectance product (MOD09) to mask clouds, but there has been little discussion of how these masks differ globally. We calculated global mean cloud frequency for both products for 2009 and found that inflated proportions of observations were flagged as cloudy in the Collection 5 MOD35 product. These erroneously categorized areas were spatially and environmentally non-random and usually occurred over high-albedo land-cover types (such as grassland and savanna) in several regions around the world. Additionally, we found that spatial variability in the processing path applied in the Collection 5 MOD35 algorithm affects the likelihood of a cloudy observation by up to 20% in some areas. These factors result in abrupt transitions in recorded cloud frequency across land-cover and processing-path boundaries, impeding their use for fine-scale, spatially contiguous modeling applications. We show that, together, these artifacts have resulted in significantly decreased and spatially biased data availability for Collection 5 MOD35-derived composite MODIS land products such as land surface temperature (MOD11) and net primary productivity (MOD17). Finally, we compare our results to mean cloud frequency in the new Collection 6 MOD35 product and find that land-cover artifacts have been reduced but not eliminated. Collection 6 thus increases data availability for some regions and land-cover types in MOD35-derived products, but practitioners need to consider how the remaining artifacts might affect their analyses.
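
The basic quantity compared here, per-pixel mean annual cloud frequency, is simply the time average of a boolean cloud flag. A minimal sketch, assuming the MOD35 and MOD09 QA bits have already been decoded into boolean stacks (that decoding is product-specific and omitted, and the arrays below are toy stand-ins):

```python
import numpy as np

def mean_cloud_frequency(cloud_flags):
    """Per-pixel mean cloud frequency for one year.

    `cloud_flags` is a (days, rows, cols) boolean stack, True where the
    mask flagged the observation cloudy."""
    return cloud_flags.mean(axis=0)

# Comparing the two masks pixel by pixel highlights where they disagree.
rng = np.random.default_rng(0)
mod35 = rng.random((365, 4, 4)) < 0.6   # toy stand-in for a decoded MOD35 mask
mod09 = rng.random((365, 4, 4)) < 0.4   # toy stand-in for a decoded MOD09 mask
disagreement = mean_cloud_frequency(mod35) - mean_cloud_frequency(mod09)
print(disagreement.round(2))
```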

Relevance: 30.00%

Publisher:

Abstract:

On cover: C00-1469-145.