850 results for Text-Based Image Retrieval
Abstract:
Retrieving a subset of items can cause the forgetting of other items, a phenomenon referred to as retrieval-induced forgetting. According to some theorists, retrieval-induced forgetting is the consequence of an inhibitory mechanism that acts to reduce the accessibility of non-target items that interfere with the retrieval of target items. Other theorists argue that inhibition is unnecessary to account for retrieval-induced forgetting, contending instead that the phenomenon can be best explained by non-inhibitory mechanisms, such as strength-based competition or blocking. The current paper provides the first major meta-analysis of retrieval-induced forgetting, conducted with the primary purpose of quantitatively evaluating the multitude of findings that have been used to contrast these two theoretical viewpoints. The results largely supported inhibition accounts, but also provided some challenging evidence, with the nature of the results often varying as a function of how retrieval-induced forgetting was assessed. Implications for further research and theory development are discussed.
Abstract:
We present an intuitive geometric approach for analysing the structure and fragility of T1-weighted structural MRI scans of human brains. Apart from computing characteristics like the surface area and volume of regions of the brain that consist of highly active voxels, we also employ Network Theory in order to test how close these regions are to breaking apart. This analysis is used in an attempt to automatically classify subjects into three categories: Alzheimer’s disease, mild cognitive impairment and healthy controls, for the CADDementia Challenge.
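The "close to breaking apart" test can be illustrated with a small graph computation. The sketch below scores a region's fragility as the fraction of articulation points (voxels whose removal disconnects the region) in a hand-built adjacency graph; the voxel graph construction and this particular fragility score are illustrative assumptions, not the paper's exact network-theoretic measure.

```python
# Sketch: estimate how close a region of highly active voxels is to
# "breaking apart" by finding articulation points in its adjacency graph.

def articulation_points(graph):
    """Tarjan's recursive DFS search for articulation points.
    graph: dict mapping node -> list of neighbour nodes."""
    disc, low, points = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # a non-root u is an articulation point if some subtree
                # cannot reach above u without passing through u
                if parent is not None and low[v] >= disc[u]:
                    points.add(u)
            elif v != parent:
                low[u] = min(low[u], disc[v])
        if parent is None and children > 1:
            points.add(u)

    for node in graph:
        if node not in disc:
            dfs(node, None)
    return points

def fragility(graph):
    """Fraction of voxels whose removal fragments the region (0 = robust)."""
    return len(articulation_points(graph)) / len(graph)

# Two triangles joined at a single voxel: removing node 2 splits the region.
region = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
```

A region whose adjacency graph has many articulation points is fragile in the sense that a few voxel deletions disconnect it.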
Abstract:
Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired by neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during the BCI training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI would improve the associated mental imagery and enhance MI-based BCIs skills.
Abstract:
We develop a method to derive aerosol properties over land surfaces using combined spectral and angular information, such as that available from the ESA Sentinel-3 mission, to be launched in 2015. A method of estimating aerosol optical depth (AOD) using angular retrieval alone has previously been demonstrated on data from the ENVISAT and PROBA-1 satellite instruments, and is extended here to the synergistic spectral and angular sampling of Sentinel-3. The method aims to improve the estimation of AOD, and to explore the estimation of fine mode fraction (FMF) and single scattering albedo (SSA) over land surfaces by inversion of a coupled surface/atmosphere radiative transfer model. The surface model includes a general physical model of angular and spectral surface reflectance. An iterative process is used to determine the optimum value of the aerosol properties providing the best fit of the corrected reflectance values to the physical model. The method is tested using hyperspectral, multi-angle Compact High Resolution Imaging Spectrometer (CHRIS) images. The values obtained from these CHRIS observations are validated using ground-based sun photometer measurements. Results from 22 image sets using the synergistic retrieval and improved aerosol models show an RMSE of 0.06 in AOD, reduced to 0.03 over vegetated targets.
Abstract:
Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010–2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting with eight existing pre-cursor algorithms three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of "best" versions of each of these algorithms (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature based on the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information in the retrievals on the occurrence of the common aerosol components. The third experiment assessed the impact of using a common nadir cloud mask for AATSR and MERIS algorithms in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions. 
The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun photometer observations for the different versions of each algorithm globally (land and coastal) and for three regions with different aerosol regimes. The analysis allowed for an assessment of sensitivities of all algorithms, which helped define the best algorithm versions for the subsequent round robin exercise; all algorithms (except for MERIS) showed some, in parts significant, improvement. In particular, using common aerosol components and partly also a priori aerosol-type climatology is beneficial. On the other hand the use of an AATSR-based common cloud mask meant a clear improvement (though with significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR. It is noted that all these observations are mostly consistent for all five analyses (global land, global coastal, three regional), which can be understood well, since the set of aerosol components defined in Sect. 3.1 was explicitly designed to cover different global aerosol regimes (with low and high absorption fine mode, sea salt and dust).
Abstract:
Image registration is a fundamental step that greatly affects later processes in image mosaicking, multi-spectral image fusion, digital surface modelling, etc., where the final solution requires blending pixel information from more than one image. It is highly desirable to identify registration regions among input stereo image pairs with high accuracy, particularly in remote sensing applications in which ground control points (GCPs) are not always available, such as when selecting a landing zone on another planet. In this paper, a framework for localization in image registration is developed. It strengthens local registration accuracy in two respects: lower reprojection error and better feature point distribution. Affine scale-invariant feature transform (ASIFT) was used for acquiring feature points and correspondences on the input images. Then, a homography matrix was estimated as the transformation model by an improved random sample consensus (IM-RANSAC) algorithm. In order to identify a registration region with a better spatial distribution of feature points, the Euclidean distance between feature points is applied (named the S criterion). Finally, the parameters of the homography matrix were optimized by the Levenberg–Marquardt (LM) algorithm using selected feature points from the chosen registration region. In the experiment section, Chang’E-2 satellite remote sensing imagery was used to evaluate the performance of the proposed method. The experimental results demonstrate that the proposed method can automatically locate a specific region with high registration accuracy between input images, achieving lower root mean square error (RMSE) and better distribution of feature points.
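The consensus step can be sketched in a few lines. The paper fits a full homography to ASIFT matches with an improved RANSAC (IM-RANSAC); to keep the sketch self-contained and dependency-free, a plain RANSAC loop with a simple 2-D translation model stands in for the homography. The model choice and all numbers below are illustrative assumptions.

```python
import random

# Minimal RANSAC sketch: repeatedly fit a model to a minimal random sample
# of correspondences and keep the model with the largest consensus set.
# A translation needs only one correspondence per sample; a homography
# would need four and a DLT solver, but the loop structure is identical.

def ransac_translation(matches, n_iters=200, tol=1.0, seed=0):
    """matches: list of ((x, y), (x2, y2)) correspondences, possibly with
    outliers. Returns ((dx, dy), inliers) for the best consensus set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x, y), (xp, yp) = rng.choice(matches)   # minimal sample: 1 pair
        dx, dy = xp - x, yp - y
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < tol
                   and abs(m[1][1] - m[0][1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refine on the consensus set (least squares = mean for a translation)
    dxs = [m[1][0] - m[0][0] for m in best_inliers]
    dys = [m[1][1] - m[0][1] for m in best_inliers]
    return (sum(dxs) / len(dxs), sum(dys) / len(dys)), best_inliers

# 8 inlier matches shifted by (5, -3) plus 2 gross outliers
matches = [((i, 2 * i), (i + 5, 2 * i - 3)) for i in range(8)]
matches += [((0, 0), (40, 40)), ((1, 1), (-30, 7))]
```

The final LM refinement in the paper plays the same role as the least-squares step here: polishing the model on the consensus set only.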
Abstract:
Datasets containing information to locate and identify water bodies have been generated from the static-water-body data, with a resolution of about 300 m (1/360 deg), recently released by the Land Cover Climate Change Initiative (LC CCI) of the European Space Agency. The LC CCI water-bodies dataset has been obtained from multi-temporal metrics based on time series of the backscattered intensity recorded by ASAR on Envisat between 2005 and 2010. The newly derived datasets coherently provide: distance to land, distance to water, water-body identifiers and lake-centre locations. The water-body identifier dataset locates the water bodies by assigning the identifiers of the Global Lakes and Wetlands Database (GLWD), and lake centres are defined for inland waters for which GLWD IDs were determined. The new datasets therefore link recent lake/reservoir/wetland extents to the GLWD, together with a set of coordinates which unambiguously locate the water bodies in the database. Information on the distance to land for each water cell and the distance to water for each land cell has many potential applications in remote sensing, where the applicability of geophysical retrieval algorithms may be affected by the presence of water or land within a satellite field of view (image pixel). During the generation and validation of the datasets, some limitations of the GLWD database and of the LC CCI water-bodies mask were found. Some examples of these inaccuracies/limitations are presented and discussed. Temporal change in water-body extent is common. Future versions of the LC CCI dataset are planned to represent temporal variation, which will permit these derived datasets to be updated.
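A distance-to-water field of this kind can be derived from a binary land/water mask with a multi-source breadth-first search. The sketch below measures distance in 4-neighbour grid steps on a toy mask; the actual products work on a ~300 m geographic grid with proper distances, so treat this purely as an illustration of the derivation.

```python
from collections import deque

# Sketch: compute, for every cell, the grid-step distance to the nearest
# water cell, by seeding a BFS with all water cells simultaneously.

def distance_to_water(mask):
    """mask[r][c] == 1 means water. Returns per-cell distance to the
    nearest water cell (0 on water itself)."""
    rows, cols = len(mask), len(mask[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):            # seed the BFS with every water cell
        for c in range(cols):
            if mask[r][c] == 1:
                dist[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

# A small lake in the corner of a 3x4 land tile
mask = [[1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 0, 0]]
```

Swapping the roles of 0 and 1 in the mask gives the companion distance-to-land field for water cells.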
Abstract:
Multidimensional visualization techniques are invaluable tools for the analysis of structured and unstructured data with variable dimensionality. This paper introduces PEx-Image (Projection Explorer for Images), a tool aimed at supporting the analysis of image collections. The tool supports a methodology that employs interactive visualizations to aid user-driven feature detection and classification tasks, thus offering improved analysis and exploration capabilities. The visual mappings employ similarity-based multidimensional projections and point placement to lay out the data on a plane for visual exploration. In addition to its application to image databases, we also illustrate how the proposed approach can be successfully employed in the simultaneous analysis of different data types, such as text and images, offering a common visual representation for data expressed in different modalities.
Abstract:
This paper presents the use of a multiprocessor architecture to improve the performance of tomographic image reconstruction. Image reconstruction in computed tomography (CT) is an intensive task for single-processor systems. We investigate the suitability of filtered image reconstruction on DSPs organized for parallel processing and compare it with an implementation based on the Message Passing Interface (MPI) library. The experimental results show that the speedups observed for both platforms increased with image resolution. In addition, the execution-time to communication-time ratios (Rt/Rc) as a function of sample size showed narrower variation for the DSP platform than for the MPI platform, which indicates its better performance for parallel image reconstruction.
Abstract:
Automatic summarization of texts is now crucial for several information retrieval tasks owing to the huge amount of information available in digital media, which has increased the demand for simple, language-independent extractive summarization strategies. In this paper, we employ concepts and metrics of complex networks to select sentences for an extractive summary. The graph or network representing one piece of text consists of nodes corresponding to sentences, while edges connect sentences that share common meaningful nouns. Because various metrics could be used, we developed a set of 14 summarizers, generically referred to as CN-Summ, employing network concepts such as node degree, length of shortest paths, d-rings and k-cores. An additional summarizer was created which selects the highest ranked sentences in the 14 systems, as in a voting system. When applied to a corpus of Brazilian Portuguese texts, some CN-Summ versions performed better than summarizers that do not employ deep linguistic knowledge, with results comparable to state-of-the-art summarizers based on expensive linguistic resources. The use of complex networks to represent texts appears therefore as suitable for automatic summarization, consistent with the belief that the metrics of such networks may capture important text features. (c) 2008 Elsevier Inc. All rights reserved.
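The degree-based variant of CN-Summ can be sketched directly from the description above: sentences are nodes, an edge links two sentences that share a meaningful noun, and the highest-degree sentences form the extract. Lacking a POS tagger here, a simple stop-word filter stands in for the paper's noun detection; that substitution and the toy text are assumptions.

```python
# Sketch of a degree-based CN-Summ summarizer: rank sentences by their
# degree in the graph whose edges connect sentences sharing content words.

STOP = {"the", "a", "an", "of", "and", "is", "are", "in", "on", "it", "to"}

def content_words(sentence):
    """Crude content-word extraction (stand-in for noun detection)."""
    return {w.strip(".,").lower() for w in sentence.split()} - STOP

def summarize(sentences, n_select=1):
    """Return the n_select highest-degree sentences, in document order."""
    words = [content_words(s) for s in sentences]
    degree = [0] * len(sentences)
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if words[i] & words[j]:          # shared content word => edge
                degree[i] += 1
                degree[j] += 1
    ranked = sorted(range(len(sentences)), key=lambda i: -degree[i])
    chosen = sorted(ranked[:n_select])       # restore document order
    return [sentences[i] for i in chosen]

text = ["Complex networks can represent text.",
        "Each node in the networks is a sentence.",
        "Edges connect sentence pairs sharing words.",
        "Unrelated filler goes here."]
```

The other CN-Summ variants replace the degree ranking with shortest-path, d-ring or k-core metrics over the same sentence graph.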
Abstract:
For the last two decades, researchers have been working on developing systems that can assist drivers in the best way possible and make driving safe. Computer vision has played a crucial part in the design of these systems. With the introduction of vision techniques, various autonomous and robust real-time traffic automation systems have been designed, such as traffic monitoring, traffic-related parameter estimation and intelligent vehicles. Among these, automatic detection and recognition of road signs has become an interesting research topic. Such a system can inform drivers about signs they do not recognize before passing them. The aim of this research project is to present an intelligent road sign recognition system based on a state-of-the-art technique, the Support Vector Machine. The project is an extension of the work done at the ITS research platform at Dalarna University [25]. The focus of this research work is on the recognition of road signs. When classifying an image, its location, size and orientation in the image plane are irrelevant features, and one way to remove this ambiguity is to extract features which are invariant under the above-mentioned transformations. These invariant features are then used in a Support Vector Machine for classification. The Support Vector Machine is a supervised learning machine that solves problems in higher dimensions with the help of kernel functions and is best known for classification problems.
Abstract:
This report presents an algorithm for locating the cut points for, and separating, vertically attached traffic signs in Sweden. The algorithm provides several advanced digital image processing features: a binary image, which represents the visual object and its complex rectangular background with ones and zeros respectively; improved cross correlation, which measures the similarity of 2D objects and filters traffic sign candidates; simplified shape decomposition, which smooths the contour of the visual object iteratively in order to reduce white noise; flipping point detection, which locates black noise candidates; and a chasm-filling algorithm, which eliminates black noise, determines the final cut points and separates originally attached traffic signs into individual ones. At each step, the intermediate results as well as the efficiency in practice are presented to show the advantages and disadvantages of the developed algorithm. The report concentrates on contour-based recognition of Swedish traffic signs. The general shapes covered are upward triangle, downward triangle, circle, rectangle and octagon. Finally, a demonstration program is presented to show how the algorithm works in a real-time environment.
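The core idea of locating a cut line between two vertically attached signs can be sketched with a row-profile scan over the binary image: the best cut row is where the foreground is thinnest. This simple projection profile is an illustrative stand-in; the report's full pipeline (cross correlation, shape decomposition, flipping point detection, chasm filling) is not reproduced.

```python
# Sketch: find a horizontal cut line between two vertically attached signs
# by locating the interior row with the fewest foreground pixels.

def cut_row(binary):
    """binary: list of rows of 0/1 pixels (1 = sign). Returns the index of
    the thinnest row between the first and last non-empty rows."""
    profile = [sum(row) for row in binary]          # foreground per row
    occupied = [i for i, p in enumerate(profile) if p > 0]
    lo, hi = occupied[0], occupied[-1]
    return min(range(lo, hi + 1), key=lambda i: profile[i])

# Two stacked "signs" joined by a thin one-pixel bridge at row 3
image = [[0, 0, 0, 0],
         [1, 1, 1, 1],
         [1, 1, 1, 1],
         [0, 1, 0, 0],   # narrow junction: the natural cut row
         [1, 1, 1, 1],
         [1, 1, 1, 1],
         [0, 0, 0, 0]]
```

In the report, chasm filling removes black noise first, so that spurious thin rows inside a single sign are not mistaken for the junction.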
Abstract:
Accurate long-term monitoring of total ozone is one of the most important requirements for identifying possible natural or anthropogenic changes in the composition of the stratosphere. For this purpose, the NDACC (Network for the Detection of Atmospheric Composition Change) UV-visible Working Group has made recommendations for improving and homogenizing the retrieval of total ozone columns from twilight zenith-sky visible spectrometers. These instruments, deployed all over the world at about 35 stations, allow total ozone to be measured twice daily with limited sensitivity to stratospheric temperature and cloud cover. The NDACC recommendations address both the DOAS spectral parameters and the calculation of the air mass factors (AMF) needed for the conversion of O3 slant column densities into vertical column amounts. The most important improvement is the use of O3 AMF look-up tables calculated using the TOMS V8 (TV8) O3 profile climatology, which allows accounting for the dependence of the O3 AMF on the seasonal and latitudinal variations of the O3 vertical distribution. To investigate their impact on the retrieved ozone columns, the recommendations have been applied to measurements from the NDACC/SAOZ (Systeme d'Analyse par Observation Zenithale) network. The revised SAOZ ozone data from eight stations deployed at all latitudes have been compared to TOMS, GOME-GDP4, SCIAMACHY-TOSOMI, SCIAMACHY-OL3, OMI-TOMS, and OMI-DOAS satellite overpass observations, as well as to those of collocated Dobson and Brewer instruments at Observatoire de Haute Provence (44 degrees N, 5.5 degrees E) and Sodankyla (67 degrees N, 27 degrees E), respectively. A significantly better agreement is obtained between SAOZ and correlative reference ground-based measurements after applying the new O3 AMFs. However, systematic seasonal differences between SAOZ and satellite instruments remain.
These are shown to mainly originate from (i) a possible problem in the satellite retrieval algorithms in dealing with the temperature dependence of the ozone cross-sections in the UV and the solar zenith angle (SZA) dependence, (ii) zonal modulations and seasonal variations of tropospheric ozone columns not accounted for in the TV8 profile climatology, and (iii) uncertainty in the stratospheric ozone profiles at high latitude in winter in the TV8 climatology. For those measurements mostly sensitive to stratospheric temperature, like TOMS, OMI-TOMS, Dobson and Brewer, or to SZA, like SCIAMACHY-TOSOMI, applying temperature and SZA corrections removes the seasonal difference with SAOZ almost completely, significantly improving the consistency between all ground-based and satellite total ozone observations.
Abstract:
In some applications of case-based systems, the attributes available for indexing are better described as linguistic variables than treated numerically. In these applications, the concept of the fuzzy hypercube can be applied to give a geometrical interpretation of similarities among cases. This paper presents an approach that uses geometrical properties of the fuzzy hypercube space to perform case indexing and retrieval.
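The geometric view can be sketched concretely: each case is a point in the unit hypercube [0, 1]^n, one coordinate per linguistic attribute's membership degree, and retrieval returns the geometrically closest stored case. The similarity measure below (1 minus the normalized Hamming distance) is one common choice on the fuzzy hypercube; the paper's exact measure, and the toy case base, are assumptions.

```python
# Sketch: case-based retrieval in the fuzzy hypercube [0, 1]^n.

def similarity(a, b):
    """1 - normalized Hamming distance between two membership vectors,
    so identical cases score 1.0 and opposite corners score 0.0."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def retrieve(case_base, query):
    """Return the name of the stored case most similar to the query."""
    return max(case_base, key=lambda name: similarity(case_base[name], query))

# Toy base: each case rates (temperature_high, pressure_low, vibration_high)
# as fuzzy membership degrees for the corresponding linguistic variables.
case_base = {
    "overheat":  (0.9, 0.2, 0.3),
    "leak":      (0.1, 0.9, 0.2),
    "imbalance": (0.3, 0.2, 0.9),
}
query = (0.8, 0.3, 0.2)
```

Indexing amounts to placing each case at its corner-region of the hypercube, so geometrically near cases share similar linguistic descriptions.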