27 results for Techniques: images processing


Relevance: 30.00%

Abstract:

The impact of novel labels on visual processing was investigated across two experiments with infants aged between 9 and 21 months. Infants viewed pairs of images across a series of preferential looking trials. On each trial, one image was novel, and the other image had previously been viewed by the infant. Some infants viewed images in silence; other infants viewed images accompanied by novel labels. The pattern of fixations both across and within trials revealed that infants in the labelling condition took longer to develop a novelty preference than infants in the silent condition. Our findings contrast with prior research by Robinson and Sloutsky (e.g., Robinson & Sloutsky, 2007a; Sloutsky & Robinson, 2008), who found that novel labels did not disrupt visual processing for infants aged over a year. Provided that overall task demands are sufficiently high, it appears that labels can disrupt visual processing for infants during the developmental period of establishing a lexicon. The results suggest that when infants are processing labels and objects, attentional resources are shared across modalities.

A near real-time flood detection algorithm giving a synoptic overview of the extent of flooding in both urban and rural areas, and capable of working during night-time and day-time even if cloud was present, could be a useful tool for operational flood relief management. The paper describes an automatic algorithm using high resolution Synthetic Aperture Radar (SAR) satellite data that builds on existing approaches, including the use of image segmentation techniques prior to object classification to cope with the very large number of pixels in these scenes. Flood detection in urban areas is guided by the flood extent derived in adjacent rural areas. The algorithm assumes that high resolution topographic height data are available for at least the urban areas of the scene, in order that a SAR simulator may be used to estimate areas of radar shadow and layover. The algorithm proved capable of detecting flooding in rural areas using TerraSAR-X with good accuracy, classifying 89% of flooded pixels correctly, with an associated false positive rate of 6%. Of the urban water pixels visible to TerraSAR-X, 75% were correctly detected, with a false positive rate of 24%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 57% and 18% respectively.
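The accuracy figures above are per-pixel detection and false positive rates. As a minimal sketch of how such rates are computed from a classified flood mask and a validation mask (function and variable names are my own, not from the paper):

```python
import numpy as np

def detection_rates(predicted, truth):
    """Fraction of true water pixels detected (TPR), and fraction of
    non-water pixels wrongly flagged as water (FPR)."""
    predicted = np.asarray(predicted, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tpr = (predicted & truth).sum() / truth.sum()
    fpr = (predicted & ~truth).sum() / (~truth).sum()
    return tpr, fpr

# Toy 4x4 scene: 1 = water. One of four water pixels is missed and
# one dry pixel is wrongly flagged.
truth = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
pred  = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
tpr, fpr = detection_rates(pred, truth)  # 0.75 and 1/12
```

The same two numbers, computed separately over rural and urban pixels (and with or without the shadow/layover regions), give the 89%/6%, 75%/24% and 57%/18% figures quoted above.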

Optimal state estimation from given observations of a dynamical system by data assimilation is generally an ill-posed inverse problem. In order to solve the problem, a standard Tikhonov, or L2, regularization is used, based on certain statistical assumptions on the errors in the data. The regularization term constrains the estimate of the state to remain close to a prior estimate. In the presence of model error, this approach does not capture the initial state of the system accurately, as the initial state estimate is derived by minimizing the average error between the model predictions and the observations over a time window. Here we examine an alternative L1 regularization technique that has proved valuable in image processing. We show that for examples of flow with sharp fronts and shocks, the L1 regularization technique performs more accurately than standard L2 regularization.
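The contrast between the two penalties can be reproduced on a toy ill-posed linear inverse problem (an illustrative sketch, not the authors' assimilation system): the Tikhonov/L2 estimate has a closed form, while the L1-regularized problem can be solved by iterative soft-thresholding (ISTA), which favours sparse, sharp-featured states.

```python
import numpy as np

def tikhonov(A, b, lam):
    # L2 (Tikhonov): closed-form minimiser of ||Ax - b||^2 + lam * ||x||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def l1_ista(A, b, lam, steps=2000):
    # L1 penalty via ISTA: gradient step on 0.5 * ||Ax - b||^2,
    # then soft-thresholding with threshold t * lam
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L, L = Lipschitz constant
    for _ in range(steps):
        x = x - t * (A.T @ (A @ x - b))
        x = np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)
    return x

# Under-determined toy problem whose true state has one sharp feature
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))
x_true = np.zeros(50)
x_true[10] = 5.0
b = A @ x_true
x_l2 = tikhonov(A, b, 1.0)   # dense, smeared-out estimate
x_l1 = l1_ista(A, b, 0.1)    # recovers the isolated spike far more sharply
```

The L2 estimate spreads energy over many components, while the L1 estimate concentrates it on the true spike, mirroring the behaviour reported for flows with sharp fronts and shocks.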

The technique of constructing a transformation, or regrading, of a discrete data set such that the histogram of the transformed data matches a given reference histogram is commonly known as histogram modification. The technique is widely used for image enhancement and normalization. A method which has been previously derived for producing such a regrading is shown to be “best” in the sense that it minimizes the error between the cumulative histogram of the transformed data and that of the given reference function, over all single-valued, monotone, discrete transformations of the data. Techniques for smoothed regrading, which provide a means of balancing the error in matching a given reference histogram against the information lost with respect to a linear transformation, are also examined. The smoothed regradings are shown to optimize certain cost functionals. Numerical algorithms for generating the smoothed regradings, which are simple and efficient to implement, are described, and practical applications to the processing of LANDSAT image data are discussed.
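A minimal sketch of the core regrading idea, without the smoothing machinery: build both cumulative histograms and, for each input grey level, pick the smallest reference level whose cumulative value reaches that of the input. This is one standard construction of a single-valued, monotone, discrete transform of the kind the paper analyses; the code below is illustrative, not the paper's algorithm.

```python
import numpy as np

def regrade(data, reference, levels=256):
    """Monotone, single-valued regrading that matches the cumulative
    histogram of `data` to that of `reference`. Assumes integer data
    with values in [0, levels)."""
    cdf_d = np.cumsum(np.bincount(data.ravel(), minlength=levels)) / data.size
    cdf_r = np.cumsum(np.bincount(reference.ravel(), minlength=levels)) / reference.size
    # Smallest reference level whose CDF reaches the input level's CDF
    lut = np.searchsorted(cdf_r, cdf_d, side="left").clip(0, levels - 1)
    return lut[data]

# Regrading an image against its own histogram is the identity map
img = np.random.default_rng(1).integers(0, 256, size=(32, 32))
out = regrade(img, img)
```

Because the lookup table is built from two non-decreasing CDFs, the resulting transform is automatically monotone, as required of the class of regradings considered above.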

Traditional chemometrics techniques are augmented with algorithms tailored specifically for the de-noising and analysis of femtosecond duration pulse datasets. The new algorithms provide additional insights on sample responses to broadband excitation and multi-moded propagation phenomena.

In the past decade, the analysis of data has faced the challenge of dealing with very large and complex datasets and the real-time generation of data. Technologies to store and access these complex and large datasets are in place. However, robust and scalable analysis technologies are needed to extract meaningful information from them. The research field of Information Visualization and Visual Data Analytics addresses this need. Information visualization and data mining are often used to complement each other. Their common goal is the extraction of meaningful information from complex and possibly large data. However, whereas data mining relies on silicon hardware, visualization techniques also aim to harness the powerful image-processing capabilities of the human brain. This article highlights research on data visualization and visual analytics techniques. Furthermore, we highlight existing visual analytics techniques, systems, and applications, including a perspective on the field from the chemical process industry.

We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations. The modified algorithm runs more than 50 times faster on the CELL’s Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60% of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
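The task-queue idea is independent of the Cell-specific SIMD packing and can be sketched in a few lines (in Python here rather than the authors' C, with a toy per-column function standing in for the radiation code):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def radiate_column(column):
    # Stand-in for the per-column radiation computation; the real code
    # evaluates the full radiative transfer for one air column.
    return np.cumsum(column) * 0.5

def radiate_grid(grid, workers=4):
    # Task-queue scheduling: instead of a fixed MPI-style domain
    # decomposition, idle workers pull the next independent column.
    columns = [grid[:, j] for j in range(grid.shape[1])]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(radiate_column, columns))
    return np.stack(results, axis=1)

grid = np.arange(15.0).reshape(3, 5)   # 3 levels x 5 columns
out = radiate_grid(grid)
```

Because each column is computed independently and in full, the scheduling order cannot change the arithmetic, which is the property behind the bit-wise identical results reported above.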

It is well known that atmospheric concentrations of carbon dioxide (CO2) (and other greenhouse gases) have increased markedly as a result of human activity since the industrial revolution. It is perhaps less appreciated that natural and managed soils are an important source and sink for atmospheric CO2 and that, primarily as a result of the activities of soil microorganisms, there is a soil-derived respiratory flux of CO2 to the atmosphere that overshadows by tenfold the annual CO2 flux from fossil fuel emissions. Therefore small changes in the soil carbon cycle could have large impacts on atmospheric CO2 concentrations. Here we discuss the role of soil microbes in the global carbon cycle and review the main methods that have been used to identify the microorganisms responsible for the processing of plant photosynthetic carbon inputs to soil. We discuss whether application of these techniques can provide the information required to underpin the management of agro-ecosystems for carbon sequestration and increased agricultural sustainability. We conclude that, although crucial in enabling the identification of plant-derived carbon-utilising microbes, current technologies lack the high-throughput ability to quantitatively apportion carbon use among phylogenetic groups, or to determine its use efficiency and destination within the microbial metabolome. It is this information that is required to inform rational manipulation of the plant–soil system to favour the organisms or physiologies most important for promoting soil carbon storage in agricultural soil.

Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools and their adoption is growing in popularity. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated preprocessing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of preprocessing and mining tools, which makes the process error-prone and inefficient. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the preprocessing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
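As a minimal illustration of the kind of import step K-Surfer automates: FreeSurfer writes plain-text stats tables in which lines beginning with '#' carry metadata and the remaining rows are whitespace-separated fields. The column names and values below are assumptions for illustration, not the exact FreeSurfer schema.

```python
def read_stats_table(lines, columns):
    """Parse a FreeSurfer-style stats table: '#' lines are header
    metadata, remaining rows are whitespace-separated fields."""
    rows = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        rows.append(dict(zip(columns, line.split())))
    return rows

# Illustrative two-row table (hypothetical values)
sample = """\
# Title Segmentation Statistics
# ColHeaders Index StructName Volume_mm3
1 Left-Hippocampus 4250.3
2 Right-Hippocampus 4301.7
""".splitlines()
table = read_stats_table(sample, ["Index", "StructName", "Volume_mm3"])
```

In K-Surfer this kind of parsing is wrapped in KNIME nodes, so the extracted rows flow directly into downstream filtering and mining nodes instead of being copied by hand.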

Techniques to retrieve reliable images from complicated objects are described, overcoming problems introduced by uneven surfaces, giving enhanced depth resolution and improving image contrast. The techniques are illustrated with application to THz imaging of concealed wall paintings.

Social networks have gained remarkable attention in the last decade. Accessing social network sites such as Twitter, Facebook, LinkedIn and Google+ through the internet and Web 2.0 technologies has become more affordable. People are becoming more interested in, and more reliant on, social networks for information, news and the opinions of other users on diverse subject matters. This heavy reliance on social network sites causes them to generate massive data characterised by three computational issues: size, noise and dynamism. These issues often make social network data too complex to analyse manually, making computational analysis pertinent. Data mining provides a wide range of techniques for detecting useful knowledge from massive datasets, such as trends, patterns and rules [44]. Data mining techniques are used for information retrieval, statistical modelling and machine learning. These techniques employ data pre-processing, data analysis, and data interpretation processes in the course of data analysis. This survey discusses the data mining techniques used to mine diverse aspects of social networks over recent decades, from historical techniques to up-to-date models, including our novel technique named TRCM. All the techniques covered in this survey are listed in Table 1, together with the tools employed and the names of their authors.

Background: Anhedonia, the loss of pleasure in usually enjoyable activities, is a central feature of major depressive disorder (MDD). The aim of the present study was to examine whether young people at familial risk of depression display signs of anticipatory, motivational or consummatory anhedonia, which would indicate that these deficits may be trait markers for MDD. Methods: The study was completed by 22 participants with a family history of depression (FH+) and 21 controls (HC). Anticipatory anhedonia was assessed by asking participants to rate their anticipated liking of pleasant and unpleasant foods which they imagined tasting when cued with images of the foods. Motivational anhedonia was measured by requiring participants to perform key presses to obtain pleasant chocolate taste rewards or to avoid unpleasant apple tastes. Additionally, physical consummatory anhedonia was examined by instructing participants to rate the pleasantness of the acquired tastes. Moreover, social consummatory anhedonia was investigated by asking participants to make preference-based choices between neutral facial expressions, genuine smiles, and polite smiles. Results: It was found that the FH+ group’s anticipated liking of unpleasant foods was significantly lower than that of the control group. By contrast, no group differences in the pleasantness ratings of the actually experienced tastes or in the number of key presses performed were observed. However, controls preferred genuine smiles over neutral expressions more often than they preferred polite smiles over neutral expressions, while this pattern was not seen in the FH+ group. Conclusion: These findings suggest that FH+ individuals demonstrate an altered anticipatory response to negative stimuli and show signs of social consummatory anhedonia, which may be trait markers for depression.