4 results for symbolic spatial information
at Universidad de Alicante
Abstract:
We consider two intrinsic sources of noise in ultra-sensitive magnetic field sensors based on MgO magnetic tunnel junctions: 25Mg nuclear spins (I = 5/2, 10% natural abundance) and S = 1 Mg vacancies. While the nuclear spins induce noise peaked in the MHz frequency range, the vacancy noise peaks in the GHz range. We find that in submicron devices the nuclear noise has a magnitude similar to that of the 1/f noise, while the vacancy-induced noise dominates in the GHz range. Interestingly, the noise spectrum under a finite magnetic field gradient may provide spatial information about the spins in the MgO layer.
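The spatial-information claim follows the standard magnetic-resonance argument that a field gradient maps position onto precession frequency. The sketch below is ours, not the paper's; it assumes a one-dimensional gradient G along the layer coordinate z, a uniform field B0, and a gyromagnetic ratio gamma.

```latex
% Larmor frequency of a spin sitting at position z under a gradient G
\omega(z) = \gamma \left( B_0 + G\, z \right)
% A spin at z_i contributes a noise peak near \omega(z_i), so the peak
% frequencies of the measured noise spectrum S(\omega) can be inverted for position:
z_i = \frac{\omega_i / \gamma - B_0}{G}
```

Under this reading, each peak in the gradient-broadened noise spectrum labels where in the MgO barrier the corresponding spin sits.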
Abstract:
In some cases external morphology is not sufficient to distinguish between populations of a species, as occurs in the dung beetle Canthon humectus hidalgoensis Bates, much less to determine phenotypic distances between them. FTIR-ATR spectroscopy shows several advantages over other identification techniques (e.g. morphological, genetic, and cuticular hydrocarbon analyses) owing to its non-invasive sample preparation, the relative speed of sample analysis, and the low cost of the technology. The infrared spectrum obtained is recognized to give a unique ‘fingerprint’, because vibrational spectra are specific to the molecular composition of the sample. In our study, the results showed that proteins, amino acids, and aromatic ethers of the insect exocuticle have promising discriminative power to distinguish between different populations of C. h. hidalgoensis. Furthermore, the correlation between the geographic distances between populations and the chemical distances obtained from proteins + amino acids + aromatic ethers was statistically significant, showing that the spectral and spatial information available for the taxa, together with appropriate chemometric methods, may contribute to a better understanding of the identity, structure, dynamics, and diversity of insect populations.
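The reported link between geographic and chemical distances suggests a matrix-correlation analysis; the abstract does not name the method, so the Python sketch below uses a generic Mantel-style permutation test on toy data. The number of populations, the Euclidean distances, and all variable names are illustrative assumptions, not the paper's pipeline.

```python
# Mantel-style test: correlate a geographic distance matrix with a
# "chemical" distance matrix built from spectral features (toy data).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Toy data: one row per population.
coords = rng.uniform(0.0, 100.0, size=(6, 2))   # geographic coordinates (km)
spectra = rng.normal(size=(6, 40))              # mean band intensities per population

geo = squareform(pdist(coords))                 # geographic distance matrix
chem = squareform(pdist(spectra))               # chemical distance matrix

def mantel(a, b, n_perm=999):
    """Pearson correlation between the upper triangles of two distance
    matrices, with a permutation p-value (Mantel-style test)."""
    iu = np.triu_indices_from(a, k=1)
    r_obs = pearsonr(a[iu], b[iu])[0]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(a.shape[0])
        hits += pearsonr(a[np.ix_(p, p)][iu], b[iu])[0] >= r_obs
    return r_obs, (hits + 1) / (n_perm + 1)

r, p_value = mantel(geo, chem)
print(f"Mantel r = {r:.3f}, permutation p = {p_value:.4f}")
```

In practice the chemical distance matrix would be built from the protein, amino acid, and aromatic ether band intensities rather than from random numbers.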
Abstract:
Camera traps have become a widely used technique for conducting biological inventories, generating a large number of database records of great interest. The main aim of this paper is to describe a new free and open-source software (FOSS) application developed to facilitate the management of camera-trap data originating from a protected Mediterranean area (SE Spain). In the last decade some other useful alternatives have been proposed, but ours focuses especially on collaborative work and on the importance of the spatial information underpinning typical camera-trap studies. This FOSS application, namely “Camera Trap Manager” (CTM), has been designed to expedite the processing of pictures on the .NET platform. CTM provides a very intuitive user interface, automatic extraction of image metadata (date, time, moon phase, location, temperature, and atmospheric pressure, among others), analytical capabilities (Geographical Information Systems, statistics, and charts, among others), and reporting capabilities (ESRI Shapefiles, Microsoft Excel spreadsheets, and PDF reports, among others). Using this application, we have achieved much simpler management, faster analysis, and a significant reduction in costs. While we were able to classify an average of 55 pictures per hour manually, CTM has made it possible to process over 1,000 photographs per hour, consequently retrieving a greater amount of data.
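CTM itself is a .NET application and its internals are not shown here; the Python sketch below only illustrates the kind of EXIF extraction it automates (capture date, time, camera model), using Pillow and a hypothetical file name.

```python
# Illustrative sketch, not CTM: read basic EXIF metadata from a camera-trap
# picture. Tag availability depends on the camera model.
from PIL import Image, ExifTags

def read_basic_metadata(path):
    """Return the EXIF tags of an image as a {tag name: value} dict."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {ExifTags.TAGS.get(tag_id, tag_id): value
                for tag_id, value in exif.items()}

if __name__ == "__main__":
    meta = read_basic_metadata("IMG_0001.JPG")   # hypothetical file name
    print("Capture time:", meta.get("DateTime", "not recorded"))
    print("Camera model:", meta.get("Model", "not recorded"))
```

Fields such as moon phase or atmospheric pressure are not standard EXIF tags and are outside this minimal sketch; a full tool would derive or import them separately.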
Abstract:
The retina is a very complex neural structure that performs spatial, temporal, and chromatic processing on visual information and converts it into a compact ‘digital’ format composed of neural impulses. This paper presents a new compiler-based framework able to describe, simulate, and validate custom retina models. The framework is compatible with the most common neural recording and analysis tools, taking advantage of interoperability with these kinds of applications. Furthermore, the code can be compiled to generate accelerated versions of the visual processing models compatible with COTS microprocessors, FPGAs, or GPUs. The whole system is part of ongoing work to design and develop a functional visual neuroprosthesis. Several case studies are described to assess the effectiveness and usefulness of the framework.
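As a rough illustration of what one stage of a custom retina model might compute, the Python sketch below implements a generic difference-of-Gaussians (center-surround) spatial filter whose rectified output is read as a firing rate and converted to Poisson spike counts. It is a textbook stage of our own choosing, not the paper's framework or its compiler output; all parameters (sigma_center, sigma_surround, gain, dt) are assumed values.

```python
# Generic retina-like spatial stage: ON-center difference-of-Gaussians filter,
# rectified to firing rates, then sampled as Poisson spike counts.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_firing_rate(image, sigma_center=1.0, sigma_surround=3.0, gain=50.0):
    """Map a grayscale image (2-D array in [0, 1]) to firing rates in Hz."""
    center = gaussian_filter(image, sigma_center)
    surround = gaussian_filter(image, sigma_surround)
    response = np.clip(center - surround, 0.0, None)   # ON-center rectification
    return gain * response

def poisson_spikes(rates_hz, dt=0.01, rng=np.random.default_rng(0)):
    """Draw spike counts for one time bin of length dt seconds."""
    return rng.poisson(rates_hz * dt)

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[24:40, 24:40] = 1.0                     # bright square stimulus
    rates = dog_firing_rate(img)
    spikes = poisson_spikes(rates)
    print("max rate (Hz):", rates.max(), "| spikes in 10 ms:", spikes.sum())
```

A real retina model would chain several such spatial, temporal, and chromatic stages, which is the kind of description the compiler-based framework is meant to accelerate.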