Abstract:
We have estimated a metallicity map of the Large Magellanic Cloud (LMC) using the Magellanic Cloud Photometric Survey (MCPS) and Optical Gravitational Lensing Experiment (OGLE III) photometric data. This is a first-of-its-kind map of metallicity out to a radius of 4-5 degrees, derived using photometric data and calibrated using spectroscopic data of Red Giant Branch (RGB) stars. We identify the RGB in the V, (V - I) colour-magnitude diagrams of small subregions of varying sizes in both data sets. We use the slope of the RGB as an indicator of the average metallicity of a subregion, and calibrate the RGB slope to metallicity using spectroscopic data for field and cluster red giants in selected subregions. The average metallicity of the LMC is found to be [Fe/H] = -0.37 dex (sigma_[Fe/H] = 0.12) from MCPS data, and [Fe/H] = -0.39 dex (sigma_[Fe/H] = 0.10) from OGLE III data. The bar is found to be the most metal-rich region of the LMC. Both data sets suggest a shallow radial metallicity gradient out to a radius of 4 kpc (-0.049 +/- 0.002 dex kpc^-1 to -0.066 +/- 0.006 dex kpc^-1). Subregions in which the mean metallicity differs from the surrounding areas do not appear to correlate with previously known features; spectroscopic studies are required in order to assess their physical significance.
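The calibration step described above, mapping RGB slope to [Fe/H] using spectroscopic subregions, amounts to a simple linear fit. A minimal sketch, with purely illustrative numbers (not values from the survey):

```python
import numpy as np

# Hypothetical calibration data: the RGB slope measured in each calibrating
# subregion, and the mean spectroscopic [Fe/H] of its red giants.
rgb_slope = np.array([-3.6, -3.2, -2.9, -2.5, -2.2])
feh_spec = np.array([-0.55, -0.47, -0.40, -0.31, -0.25])

# Least-squares linear calibration: [Fe/H] = a * slope + b
a, b = np.polyfit(rgb_slope, feh_spec, 1)

def feh_from_slope(slope):
    """Map an RGB slope measured in any subregion to a photometric [Fe/H]."""
    return a * slope + b
```

Applying `feh_from_slope` to every subregion's slope then yields the photometric metallicity map.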
Abstract:
Cross-domain and cross-modal matching has many applications in computer vision and pattern recognition; examples include heterogeneous face recognition and cross-view action recognition. The task is challenging because the data in the two domains can differ significantly. In this work, we propose a coupled dictionary and transformation learning approach that models the relationship between the data in the two domains. The approach learns a pair of transformation matrices that map the data in the two domains such that, in the transformed space, they share common sparse representations with respect to their own dictionaries. The dictionaries for the two domains are learnt in a coupled manner, with an additional discriminative term to ensure improved recognition performance. The dictionaries and the transformation matrices are updated jointly in an iterative manner. The applicability of the proposed approach is illustrated by evaluating its performance on several challenging tasks: face recognition across pose, illumination, and resolution; heterogeneous face recognition; and cross-view action recognition. Extensive experiments on five datasets, namely CMU-PIE, Multi-PIE, ChokePoint, HFB, and IXMAS, and comparisons with several state-of-the-art approaches show the effectiveness of the proposed approach. (C) 2015 Elsevier B.V. All rights reserved.
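The core idea, two transformations whose outputs share one sparse code under domain-specific dictionaries, can be sketched as an alternating minimization. This is not the authors' algorithm (it omits the discriminative term and uses a basic ISTA step for the codes); it is a toy illustration of the shared-code objective ||W1 X1 - D1 A||^2 + ||W2 X2 - D2 A||^2 + lam ||A||_1:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(z, t):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def coupled_dict_transform(X1, X2, k=8, d=16, lam=0.1, iters=30):
    """Alternating-minimization sketch: learn W1, W2, D1, D2 and one
    sparse code matrix A shared by both transformed domains."""
    n = X1.shape[1]
    W1 = rng.standard_normal((d, X1.shape[0]))
    W2 = rng.standard_normal((d, X2.shape[0]))
    D1 = rng.standard_normal((d, k)); D1 /= np.linalg.norm(D1, axis=0)
    D2 = rng.standard_normal((d, k)); D2 /= np.linalg.norm(D2, axis=0)
    A = np.zeros((k, n))
    for _ in range(iters):
        # Shared sparse codes: one ISTA step on the summed data-fit terms.
        Y1, Y2 = W1 @ X1, W2 @ X2
        G = D1.T @ D1 + D2.T @ D2
        step = 1.0 / max(np.linalg.norm(G, 2), 1e-8)
        A = soft_threshold(A - step * (G @ A - D1.T @ Y1 - D2.T @ Y2),
                           step * lam)
        # Dictionary updates: least squares, then column normalisation.
        D1 = Y1 @ np.linalg.pinv(A)
        D2 = Y2 @ np.linalg.pinv(A)
        D1 /= np.linalg.norm(D1, axis=0, keepdims=True) + 1e-12
        D2 /= np.linalg.norm(D2, axis=0, keepdims=True) + 1e-12
        # Transformation updates: map each domain onto its reconstruction.
        W1 = (D1 @ A) @ np.linalg.pinv(X1)
        W2 = (D2 @ A) @ np.linalg.pinv(X2)
    return W1, W2, D1, D2, A
```

After training, matching across domains reduces to comparing the sparse codes of W1 x and W2 y, which is what makes the transformed space a common ground for recognition.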
Abstract:
We revisit the problem of temporal self-organization using activity diffusion based on the neural gas (NGAS) algorithm. Using a potential-function formulation motivated by a spatio-temporal metric, we derive an adaptation rule for dynamic vector quantization of data. Simulation results show that our algorithm learns the input distribution and time correlation much faster than the static neural gas method over the same data sequence under similar training conditions.
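For reference, the static neural gas baseline mentioned above adapts every prototype toward each input with a strength that decays with its distance rank. A minimal sketch of that classic rule (not the paper's activity-diffusion variant):

```python
import numpy as np

def neural_gas_step(W, x, eps=0.1, lam=2.0):
    """One classic neural-gas step: rank all prototypes by distance to the
    input x, then move each one toward x with weight exp(-rank / lam)."""
    dists = np.linalg.norm(W - x, axis=1)
    ranks = np.argsort(np.argsort(dists))   # 0 = closest prototype
    h = np.exp(-ranks / lam)                # rank-based neighbourhood
    return W + eps * h[:, None] * (x - W)

rng = np.random.default_rng(1)
W = rng.standard_normal((10, 2))            # 10 prototype vectors in 2-D
for x in rng.standard_normal((500, 2)):     # stream of input samples
    W = neural_gas_step(W, x)               # prototypes quantize the input
```

The dynamic variant in the paper replaces this purely spatial ranking with a spatio-temporal one, so that prototypes also capture the time correlation of the sequence.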
Abstract:
Charge-transfer (CT) excitations are essential for photovoltaic phenomena in organic solar cells. Owing to the complexity of molecular geometries and orbital coupling, a detailed analysis and spatial visualisation of CT processes can be challenging. In this paper, a new detail-oriented visualisation scheme, the particle-hole map (PHM), is applied and explained for the purpose of spatial analysis of excitations in organic molecules. The PHM can be obtained from the output of a time-dependent density-functional theory calculation with negligible additional computational cost, and provides a useful physical picture for understanding the origins and destinations of electrons and holes during an excitation process. As an example, we consider intramolecular CT excitations in Diketopyrrolopyrrole-based molecules, and relate our findings to experimental results.
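Conceptually, a particle-hole map is a joint distribution over hole position r and particle position r' for a given excitation. The toy sketch below (not the authors' code) builds such a map on a 1-D grid from hypothetical transition coefficients c[i, a] between stand-in occupied and virtual orbitals; in practice these coefficients would come from the TDDFT calculation:

```python
import numpy as np

x = np.linspace(-5, 5, 200)  # 1-D spatial grid

def orbital(n):
    """Toy harmonic-oscillator-like orbitals as stand-ins for real ones."""
    f = np.polynomial.hermite.hermval(x, [0] * n + [1]) * np.exp(-x**2 / 2)
    return f / np.linalg.norm(f)

phi_occ = np.array([orbital(0), orbital(1)])  # "hole" (occupied) orbitals
phi_vir = np.array([orbital(2), orbital(3)])  # "particle" (virtual) orbitals
c = np.array([[0.9, 0.1],                     # assumed transition
              [0.2, 0.3]])                    # coefficients c[i, a]

# amp[r, s] = sum_{i,a} c[i,a] * phi_occ[i](r) * phi_vir[a](s);
# its square gives the particle-hole map (up to normalisation).
amp = np.einsum('ia,ir,as->rs', c, phi_occ, phi_vir)
phm = amp**2
phm /= phm.sum()
```

Reading the map row-wise then shows, for a hole created near r, where the excited electron is most likely to end up, which is exactly the "origins and destinations" picture the abstract describes for CT excitations.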