23 results for toolbox

at Universidad Politécnica de Madrid


Relevance: 20.00%

Abstract:

Matlab, one of the numerical computing environments most widely used in teaching and research, includes among its many tools one specifically for digital image processing. This image processing toolbox consists of a set of additional functions that extend the capabilities of the Matlab numerical environment and allow a large number of digital image processing operations to be carried out directly from the main program. However, although Matlab has a good help section both online and within the program itself, the bibliography available in Spanish is very limited, and in the particular case of the image processing toolbox it is practically nonexistent and highly specialized, demanding a solid background in mathematics and digital image processing. Starting from an analysis of all the functions and possibilities available in the toolbox, the project classifies, summarizes and explains each of them at user level, defining all the possible input and output variables, describing the most common tasks in which each function is used, comparing results and providing illustrative examples that clarify their use and application. In addition, the reader is introduced to the general use of Matlab through its essential operations, and the more advanced concepts of the toolbox are explained so that extensive prior training is not needed. Thus, any student or teacher who wants to get started in digital image processing with Matlab will have a document that serves both as a reference for understanding the operation of any function in the toolbox and as a guide for implementing the most common digital image processing operations.
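As a flavour of the operations such a guide covers, the following minimal sketch chains a typical read-filter-detect pipeline. It is written in Python with scikit-image as a stand-in for Matlab's image processing toolbox, and the file name 'photo.png' is hypothetical:

    from skimage import io, color, filters, feature

    img = io.imread('photo.png')                      # load an RGB image as an array
    gray = color.rgb2gray(img)                        # convert to grayscale in [0, 1]
    smooth = filters.gaussian(gray, sigma=2)          # Gaussian low-pass filter
    edges = feature.canny(smooth, sigma=1)            # boolean edge map (Canny)
    io.imsave('edges.png', (edges * 255).astype('uint8'))  # save the result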

Relevance: 20.00%

Abstract:

The analysis of the interdependence between time series has become an important field of research in recent years, mainly as a result of advances in the characterization of dynamical systems from the signals they produce, the introduction of concepts such as generalized and phase synchronization, and the application of information theory to time series analysis. In neurophysiology, different analytical tools stemming from these concepts have been added to the 'traditional' set of linear methods, which includes the cross-correlation and the coherency function in the time and frequency domains, respectively, as well as more elaborate tools such as Granger causality. This increase in the number of approaches for assessing the existence of functional (FC) or effective connectivity (EC) between two (or among many) neural networks, along with the mathematical complexity of the corresponding time series analysis tools, makes it desirable to arrange them into a unified, easy-to-use software package. The goal is to allow neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of these analysis methods from a single integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a toolbox for the Matlab® environment (The MathWorks, Inc.), designed to study functional and effective brain connectivity from neurophysiological data such as multivariate EEG and/or MEG records. It also includes visualization tools and statistical methods to address the problem of multiple comparisons. We believe that this toolbox will be very helpful to researchers working in the emerging field of brain connectivity analysis.
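To make the 'traditional' linear measures mentioned above concrete, here is a minimal sketch (not part of HERMES) that computes the lagged cross-correlation and the magnitude-squared coherence of two synthetic channels with NumPy/SciPy; the sampling rate and signals are made up:

    import numpy as np
    from scipy.signal import coherence

    fs = 250.0                                        # sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    y = np.roll(x, 20) + 0.5 * np.random.randn(t.size)    # delayed, noisy copy of x

    # Cross-correlation over lags (time domain)
    xc = np.correlate(x - x.mean(), y - y.mean(), mode='full')
    xc /= x.std() * y.std() * x.size                  # normalize to [-1, 1]
    lag = np.argmax(xc) - (x.size - 1)                # lag (samples) of peak coupling

    # Magnitude-squared coherence (frequency domain)
    f, Cxy = coherence(x, y, fs=fs, nperseg=512)
    print(lag, Cxy[np.argmin(np.abs(f - 10.0))])      # peak lag and coherence at 10 Hz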

Relevance: 20.00%

Abstract:

Subtraction of Ictal SPECT Co-registered to MRI (SISCOM) is an imaging technique used to localize the epileptogenic focus in patients with intractable partial epilepsy. The aim of this study was to determine the accuracy of the registration algorithms involved in SISCOM analysis using FocusDET, a new user-friendly application. To this end, Monte Carlo simulation was employed to generate realistic SPECT studies. Simulated sinograms were reconstructed using the Filtered BackProjection (FBP) algorithm and an Ordered Subsets Expectation Maximization (OSEM) reconstruction method that included compensation for all degradations. Registration errors in SPECT-SPECT and SPECT-MRI registration were evaluated by comparing the theoretical and actual transforms. Patient studies with well-localized epilepsy were also included in the registration assessment. Global registration errors, including SPECT-SPECT and SPECT-MRI registration errors, were less than 1.2 mm on average and in no case exceeded the voxel size (3.32 mm) of the SPECT studies. Although images reconstructed using OSEM led to lower registration errors than images reconstructed with FBP, the differences between OSEM and FBP reconstructions were less than 0.2 mm on average. This indicates that correction for degradations does not play a major role in the SISCOM process, facilitating the application of the methodology in centers where OSEM is not implemented with correction of all degradations. These findings, together with those obtained by clinicians from patients via MRI, interictal and ictal SPECT, and video-EEG, show that FocusDET is a robust application for performing SISCOM analysis in clinical practice.
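The registration-accuracy measure described, comparing the theoretical transform with the recovered one, can be sketched as follows; the 4x4 homogeneous matrices and the sample grid are illustrative only, not the study's data:

    import numpy as np

    def mean_registration_error(T_true, T_est, points):
        # Mean Euclidean distance (mm) between points mapped by both transforms
        pts = np.c_[points, np.ones(len(points))]     # to homogeneous coordinates
        d = (pts @ T_true.T)[:, :3] - (pts @ T_est.T)[:, :3]
        return np.linalg.norm(d, axis=1).mean()

    # Regular grid of sample points in a SPECT volume (3.32 mm voxels, assumed size)
    grid = np.mgrid[0:128:8, 0:128:8, 0:64:8].reshape(3, -1).T * 3.32

    T_true = np.eye(4); T_true[:3, 3] = [2.0, -1.0, 0.5]   # known (applied) transform
    T_est = np.eye(4); T_est[:3, 3] = [2.3, -0.8, 0.4]     # recovered transform
    print(round(mean_registration_error(T_true, T_est, grid), 2), 'mm')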

Relevance: 20.00%

Abstract:

A toolbox is a set of procedures that takes advantage of the computing power and graphical capabilities of a CAS (computer algebra system). With these procedures, students can solve math problems, apply mathematics to engineering, or simply reinforce the learning of certain mathematical concepts. From the point of view of their construction, we can consider two types of toolbox: (i) the closed box, built by the teacher, in which the utility files are provided to the students together with the respective tutorials and several worksheets with proposed exercises and problems,
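A minimal sketch of such a 'closed box' procedure, in Python with sympy as the CAS; the function and its task (tangent lines) are illustrative, not taken from the project:

    import sympy as sp

    def tangent_line(expr_str, x0):
        # Tangent line to f(x) at x = x0, returned as a sympy expression
        x = sp.Symbol('x')
        f = sp.sympify(expr_str)                      # parse the student's formula
        slope = sp.diff(f, x).subs(x, x0)             # symbolic derivative at x0
        return sp.expand(slope * (x - x0) + f.subs(x, x0))

    print(tangent_line('x**2 + 1', 2))                # -> 4*x - 3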

Relevance: 10.00%

Abstract:

The Fractal Image Informatics toolbox (Oleschko et al., 2008a; Torres-Argüelles et al., 2010) was applied to extract, classify and model the topological structure and dynamics of surface roughness in two highly eroded catchments of Mexico. Both areas are affected by gully erosion (Sidorchuk, 2005) and characterized by avalanche-like matter transport. Five contrasting morphological patterns were distinguished across the slope of the bare eroded surface of a Faeozem (Queretaro State), while only one roughness pattern, apparently independent of the slope, was documented for an Andosol (Michoacan State). We called these patterns 'roughness clusters' and compared them in terms of metrizability, continuity, compactness, topological connectedness (global and local) and invariance, separability, and degree of ramification (Weyl, 1937). All of these topological measurands were correlated with the variance, skewness and kurtosis of the gray-level distribution of digital images. The morphological and spatial dynamics of the roughness clusters were measured and mapped with high precision in terms of fractal descriptors. The Hurst exponent was especially suitable for distinguishing between the structure of the 'turtle shell' and 'ramification' patterns (sediment-producing zone A of the slope), as well as the 'honeycomb' (sediment transport zone B) and the 'dinosaur steps' and 'corals' (sediment deposition zone C) roughness clusters. Some other structural attributes of the studied patterns were also statistically different and correlated with the variance, skewness and kurtosis of the gray-level distribution of multiscale digital images. The scale invariance of the classified roughness patterns was documented across a range of five image resolutions. We conjecture that the geometrization of erosion patterns in terms of roughness clustering might benefit most semi-quantitative models developed for erosion and sediment yield assessments (de Vente and Poesen, 2005).
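As an illustration of the descriptor highlighted above, the following sketch estimates a Hurst exponent by rescaled-range (R/S) analysis of a synthetic one-dimensional profile; a real analysis would work on image transects, and the window sizes are arbitrary:

    import numpy as np

    def hurst_rs(x, windows=(16, 32, 64, 128, 256)):
        rs = []
        for w in windows:
            chunks = x[:len(x) // w * w].reshape(-1, w)
            dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
            R = dev.max(axis=1) - dev.min(axis=1)     # range of cumulative deviation
            S = chunks.std(axis=1)                    # standard deviation per chunk
            rs.append(np.mean(R / S))
        # H is the slope of log(R/S) against log(window size)
        return np.polyfit(np.log(windows), np.log(rs), 1)[0]

    profile = np.cumsum(np.random.randn(4096))        # Brownian-like roughness profile
    print(hurst_rs(np.diff(profile)))                 # ~0.5 for uncorrelated increments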

Relevance: 10.00%

Abstract:

Image analysis could be a useful tool for investigating the spatial patterns of apparent soil moisture at multiple resolutions. The objectives of the present work were (i) to define apparent soil moisture patterns from vertical planes of Vertisol pit images and (ii) to describe the scaling of the apparent soil moisture distribution using fractal parameters. Twelve soil pits (0.70 m long × 0.60 m wide × 0.30 m deep) were excavated on a bare Mazic Pellic Vertisol. Six of them were excavated in April 2011, and six pits were established in May 2011 after 3 days of a moderate rainfall event. Digital photographs were taken of each Vertisol pit using a Kodak™ digital camera. The mean image size was 1600 × 945 pixels, with one physical pixel ≈373 μm of the photographed soil pit. Each soil image was analyzed using two fractal scaling exponents, the box-counting (capacity) dimension (DBC) and the interface fractal dimension (Di), and three prefractal scaling coefficients: the total number of boxes intercepting the foreground pattern at unit scale (A), the fractal lacunarity at unit scale (Λ1), and the Shannon entropy at unit scale (S1). All the scaling parameters identified significant differences between the two sets of spatial patterns. Fractal lacunarity was the best discriminator between apparent soil moisture patterns. Soil image interpretation with fractal exponents and prefractal coefficients can be incorporated into a site-specific agriculture toolbox. While fractal exponents convey information on the space-filling characteristics of the pattern, prefractal coefficients represent the investigated soil property as seen through a higher-resolution microscope. In spite of some computational and practical limitations, image analysis of apparent soil moisture patterns could be used in connection with traditional soil moisture sampling, which always renders point estimates.
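A minimal sketch of the box-counting (capacity) dimension DBC used above: count occupied boxes at several box sizes and fit the log-log slope. The binary pattern here is synthetic, not a soil image:

    import numpy as np

    def box_count(mask, size):
        h, w = mask.shape
        m = mask[:h // size * size, :w // size * size]        # trim to multiples
        boxes = m.reshape(h // size, size, -1, size).swapaxes(1, 2)
        return np.count_nonzero(boxes.any(axis=(2, 3)))       # occupied boxes

    def box_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
        counts = [box_count(mask, s) for s in sizes]
        # D_BC is minus the slope of log N(s) against log s
        return -np.polyfit(np.log(sizes), np.log(counts), 1)[0]

    rng = np.random.default_rng(0)
    pattern = rng.random((256, 256)) > 0.7            # stand-in binary moisture pattern
    print(round(box_dimension(pattern), 2))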

Relevance: 10.00%

Abstract:

Evolutionary search algorithms have become an essential asset in the algorithmic toolbox for solving high-dimensional optimization problems across a broad range of bioinformatics applications. Genetic algorithms, the most well-known and representative evolutionary search technique, have been the subject of the major part of such applications. Estimation of distribution algorithms (EDAs) offer a novel evolutionary paradigm that constitutes a natural and attractive alternative to genetic algorithms. They make use of a probabilistic model, learnt from the promising solutions, to guide the search process. In this paper, we set out a basic taxonomy of EDA techniques, underlining the nature and complexity of the probabilistic model of each EDA variant. We review a set of innovative works that make use of EDA techniques to solve challenging bioinformatics problems, emphasizing the EDA paradigm's potential for further research in this domain.
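A minimal sketch of the simplest EDA variant in such a taxonomy, a univariate marginal distribution algorithm (UMDA), on the OneMax toy problem; the population sizes and the problem are illustrative:

    import numpy as np

    rng = np.random.default_rng(1)
    n_bits, pop_size, n_sel, n_gen = 50, 100, 30, 40
    p = np.full(n_bits, 0.5)                          # model: marginal P(x_i = 1)

    for gen in range(n_gen):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample the model
        fitness = pop.sum(axis=1)                     # OneMax: number of ones
        best = pop[np.argsort(fitness)[-n_sel:]]      # truncation selection
        p = best.mean(axis=0).clip(0.05, 0.95)        # re-estimate the marginals

    print(fitness.max(), 'of', n_bits)                # best fitness in final population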

Relevance: 10.00%

Abstract:

Multichannel audio has advanced by leaps and bounds in recent years, in recording techniques as well as in playback. This project therefore brings both together: a microphone array, the EigenMike32 from MH Acoustics, and a playback system based on Wave Field Synthesis technology, installed by Iosono at Jade Hochschule Oldenburg. To link these two points of the audio chain, two different kinds of coding are proposed: the reproduction of the EigenMike32's horizontal-plane capture, and third-order Ambisonics (High Order Ambisonics, HOA), a coding technique based on spherical harmonics through which the acoustic field itself is simulated instead of the individual sound sources. Both were developed in the Matlab environment, supported by the Isophonics script collection called Spatial Audio Matlab Toolbox. To test them, a series of listening tests was carried out in which they were compared with recordings made at the same time with a dummy head, assumed to be the method closest to the way we hear. These tests included other recordings and codings made with a Schoeps Double MS (DMS), which are explained in the project "3D audio rendering through Ambisonics techniques: from multi-microphone recordings (DMS Schoeps) to a WFS system, through Matlab". Each test consisted of a battery of four audio clips repeated four times for each recorded situation (a conversation, a class, a street, and a college canteen or Mensa).
The results were unexpected: the third-order HOA coding fell below the 'Good' rating, possibly because material intended for a three-dimensional array was reproduced over a two-dimensional one. On the other hand, the coding that consisted of extracting the horizontal-plane microphones maintained the 'Good' rating in all situations. It is concluded that HOA should continue to be tested with deeper knowledge of spherical harmonics, while the other, much simpler coder can be used for situations without much spatial complexity.
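The encoding idea behind Ambisonics can be sketched at first order (traditional B-format), rather than the third order used in the project: a mono signal is weighted by directional gains derived from the source direction. The formulas below are the standard first-order ones; the test tone is made up:

    import numpy as np

    def encode_bformat(signal, azimuth, elevation):
        # First-order Ambisonics (B-format) encoding; angles in radians
        w = signal / np.sqrt(2)                       # omnidirectional component
        x = signal * np.cos(azimuth) * np.cos(elevation)
        y = signal * np.sin(azimuth) * np.cos(elevation)
        z = signal * np.sin(elevation)
        return np.stack([w, x, y, z])                 # shape (4, n_samples)

    fs = 48000
    t = np.arange(fs) / fs
    mono = np.sin(2 * np.pi * 440 * t)                # 1 s test tone
    bfmt = encode_bformat(mono, azimuth=np.pi / 4, elevation=0.0)
    print(bfmt.shape)                                 # (4, 48000)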

Relevance: 10.00%

Abstract:

Biomedical ontologies are key elements for building up the Life Sciences Semantic Web. Reusing and building biomedical ontologies requires flexible and versatile tools to manipulate them efficiently, in particular for enriching their axiomatic content. The Ontology Pre-Processor Language (OPPL) is an OWL-based language for automating the changes to be performed in an ontology. OPPL augments the ontologists' toolbox by providing a more efficient, and less error-prone, mechanism for enriching a biomedical ontology than manual editing. Results: We present OPPL-Galaxy, a wrapper for using OPPL within Galaxy. The functionality delivered by OPPL (i.e. automated ontology manipulation) can be combined with the tools and workflows devised within the Galaxy framework, resulting in an enhancement of OPPL. Use cases are provided in order to demonstrate OPPL-Galaxy's capability for enriching, modifying and querying biomedical ontologies. Conclusions: Coupling OPPL-Galaxy with other bioinformatics tools of the Galaxy framework results in a system that is more than the sum of its parts. OPPL-Galaxy opens a new dimension of analyses and exploitation of biomedical ontologies, including automated reasoning, paving the way towards advanced biological data analyses.

Relevance: 10.00%

Abstract:

Talk on the effect of catch crops in European farming systems and the application of strategies framed within N-TOOLBOX for reducing pollution from nitrate leaching, presented at the XI RUENA Meeting (Red de Uso Eficiente del Nitrógeno en Agricultura, the Network for the Efficient Use of Nitrogen in Agriculture).

Relevance: 10.00%

Abstract:

The analysis of the interdependence between time series has become an important field of research, mainly as a result of advances in the characterization of dynamical systems from the signals they produce and the introduction of concepts such as generalized (GS) and phase synchronization (PS). This increase in the number of approaches for assessing the existence of so-called functional (FC) and effective connectivity (EC) (Friston 1994) between two (or among many) neural networks, along with their mathematical complexity, makes it desirable to arrange them into a unified toolbox, thereby allowing neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of them.

Relevance: 10.00%

Abstract:

Solar drying is one of the important processes used for extending the shelf life of agricultural products. With regard to consumer requirements, solar drying should be made more suitable in terms of curtailing total drying time and preserving product quality. Therefore, the objective of this study was to develop a fuzzy logic-based control system that performs a 'human-operator-like' control approach using previously developed low-cost model-based sensors. The Fuzzy Logic Toolbox of Matlab and the Borland C++ Builder tool were used to develop the required control system. An experimental solar dryer, constructed by CONA SOLAR (Austria), was used during the development of the control system. Sensirion sensors were used to characterize the drying air at different positions in the dryer, and the smart sensor SMART-1 was applied to include the rate of wood water extraction in the control system (SMART-1 takes the difference in absolute humidity of the air between the outlet and the inlet of the solar dryer to be the extracted water). A comprehensive test of different fuzzy control models was performed over a 3-week period, and the data obtained from these experiments were analyzed. The findings of this study suggest that the developed fuzzy logic-based control system is able to tackle the difficulties related to controlling the solar drying process.
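The 'human-operator-like' control approach can be sketched with a toy Mamdani-style controller: triangular membership functions, two rules and centroid defuzzification. The variables (humidity error, fan-speed correction) and the rule base are illustrative assumptions, not the system described in the study:

    import numpy as np

    def tri(x, a, b, c):
        # Triangular membership function with feet a, c and peak b
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def fan_correction(humidity_error):
        u = np.linspace(-1.0, 1.0, 201)               # normalized output universe
        w_neg = tri(humidity_error, -1.0, -0.5, 0.0)  # rule: error negative -> slow down
        w_pos = tri(humidity_error, 0.0, 0.5, 1.0)    # rule: error positive -> speed up
        agg = np.maximum(np.minimum(w_neg, tri(u, -1.0, -0.5, 0.0)),
                         np.minimum(w_pos, tri(u, 0.0, 0.5, 1.0)))
        if agg.sum() == 0.0:
            return 0.0                                # no rule fired: leave fan as is
        return float((u * agg).sum() / agg.sum())     # centroid defuzzification

    print(fan_correction(0.3))                        # mild positive correction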

Relevance: 10.00%

Abstract:

The paper presents the main elements of a project entitled ICT-Emissions, which aims at developing a novel methodology to evaluate the impact of ICT-related measures on mobility, vehicle energy consumption and CO2 emissions of vehicle fleets at the local scale, in order to promote the wider application of the most appropriate ICT measures. The proposed methodology combines traffic and emission modelling at the micro and macro scales. These are linked with interfaces and submodules that are specifically designed and developed. A number of sources are available to the consortium to obtain the necessary input data; in addition, experimental campaigns are offered to fill gaps of information in traffic and emission patterns. The application of the methodology will be demonstrated using commercially available software. However, the methodology is developed in such a way as to enable its implementation with a variety of emission and traffic models. Particular emphasis is given to (a) the correct estimation of driver behaviour as a result of traffic-related ICT measures, (b) the coverage of a large number of current vehicle technologies, including ICT systems, and (c) near-future technologies such as hybrid, plug-in hybrid and electric vehicles. The innovative combination of traffic, driver and emission models produces a versatile toolbox that can simulate the impact on energy and CO2 of infrastructure measures (traffic management, dynamic traffic signs, etc.), driver assistance systems and eco-solutions (speed/cruise control, start/stop systems, etc.), or a combination of measures (cooperative systems). The methodology is validated by application in the Turin area, and its capacity is further demonstrated by application under real-world conditions in Madrid and Rome.
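The micro-scale coupling of traffic and emission models can be sketched as follows: a per-second speed profile, as a traffic microsimulator would produce, is integrated against an assumed speed-dependent CO2 emission-factor curve. The curve coefficients are invented for illustration; the project itself relies on established traffic and emission models:

    import numpy as np

    def co2_grams(speeds_kmh, a=6000.0, b=30.0, c=0.3):
        # Assumed g/km emission-factor curve; coefficients are illustrative
        v = np.maximum(speeds_kmh, 5.0)               # idle floor, avoids v near 0
        g_per_km = a / v + b + c * v                  # high at crawl, flatter at cruise
        return float((g_per_km * v / 3600.0).sum())   # g/km times km/s, summed at 1 Hz

    # Stylized stop-and-go cycle: accelerate, cruise, decelerate (1 sample per second)
    profile = np.concatenate([np.linspace(0, 50, 30),
                              np.full(60, 50.0),
                              np.linspace(50, 0, 30)])
    print(round(co2_grams(profile)), 'g CO2 over', profile.size, 's')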