49 results for "Automatic Gridding of microarray images"
Abstract:
The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters which is uniparametric in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the single-parameter case. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the threshold values of the Otsu algorithm, and conclude that our algorithm produces reliable results.
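For readers who want to experiment with the idea, here is a minimal Python sketch (not the authors' exact method) of the classical single-parameter 5-tap pyramid kernel together with an Otsu baseline threshold; the soil image is a random stand-in.

```python
import numpy as np
from scipy.ndimage import convolve1d
from skimage.filters import threshold_otsu

def generating_kernel(a):
    """Classical 5-tap pyramid filter; a = 0.4 gives near-Gaussian behaviour."""
    return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

def reduce_step(image, a=0.4):
    """One REDUCE step: separable low-pass filtering, then decimation."""
    k = generating_kernel(a)
    return convolve1d(convolve1d(image, k, axis=0), k, axis=1)[::2, ::2]

def expand_step(image, a=0.4):
    """One EXPAND step: zero-upsample, then interpolate with the same kernel."""
    k = generating_kernel(a)
    up = np.zeros((2 * image.shape[0], 2 * image.shape[1]))
    up[::2, ::2] = image
    return 4 * convolve1d(convolve1d(up, k, axis=0), k, axis=1)

rng = np.random.default_rng(0)
soil = rng.random((128, 128))                   # stand-in grayscale soil image
detail = soil - expand_step(reduce_step(soil))  # one Laplacian pyramid level
print("detail energy:", (detail ** 2).sum())
print("Otsu threshold (comparison baseline):", threshold_otsu(soil))
```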
Abstract:
In this study, we present a framework based on ant colony optimization (ACO) for tackling combinatorial problems. ACO algorithms have been applied to many different problems, with research focusing on algorithmic variants that obtain high-quality solutions. Usually, the implementations are redone for each new problem, even when the details of the ACO algorithm stay the same. Our goal, however, is to build a sustainable framework for applications on permutation problems. We concentrate on understanding the behavior of pheromone trails and on the specific methods that can be combined with them. Finally, we will propose an automatic offline configuration tool to build an effective algorithm.
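As a concrete illustration of the scheme, the following Python sketch runs a bare-bones ACO loop on a toy permutation problem (a small random TSP). Parameter names (alpha, beta, rho) follow the usual ACO conventions and are assumptions, not values from the framework.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
dist = rng.random((n, n)) + 0.1 + np.eye(n)   # toy distance matrix
tau = np.ones((n, n))                         # pheromone trails
alpha, beta, rho, n_ants, iters = 1.0, 2.0, 0.1, 10, 50

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

def construct_tour():
    """Build one permutation, biased by pheromone and heuristic visibility."""
    tour = [0]
    while len(tour) < n:
        i = tour[-1]
        mask = np.ones(n, bool)
        mask[tour] = False
        w = (tau[i] ** alpha) * ((1.0 / dist[i]) ** beta) * mask
        tour.append(rng.choice(n, p=w / w.sum()))
    return tour

best, best_len = None, np.inf
for _ in range(iters):
    tours = [construct_tour() for _ in range(n_ants)]
    tau *= (1 - rho)                          # evaporation
    for t in tours:
        L = tour_length(t)
        if L < best_len:
            best, best_len = t, L
        for i in range(n):                    # pheromone deposit
            a, b = t[i], t[(i + 1) % n]
            tau[a, b] += 1.0 / L
print([int(c) for c in best], round(best_len, 3))
```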
Abstract:
Important physical and biological processes in soil-plant-microbial systems are dominated by the geometry of soil pore space, and a correct model of this geometry is critical for understanding them. We analyze the geometry of soil pore space using X-ray computed tomography (CT) of intact soil columns. We present here some preliminary results of our investigation of Minkowski functionals of parallel sets to characterize soil structure. We also show how the evolution of Minkowski morphological measurements of parallel sets may help to characterize the influence of conventional tillage and a permanent cover crop of resident vegetation on soil structure in a Spanish Mediterranean vineyard.
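A minimal 2D sketch of the measurement, assuming scikit-image is available: dilate a binary pore image with disks of growing radius r (the parallel sets) and record the Minkowski functionals area, perimeter and Euler number at each step. The pore image is a random placeholder, not CT data.

```python
import numpy as np
from skimage.morphology import binary_dilation, disk
from skimage.measure import perimeter, euler_number

rng = np.random.default_rng(2)
pores = rng.random((200, 200)) < 0.05        # sparse stand-in pore phase

# Minkowski functionals of the parallel sets at increasing radius r
for r in range(0, 11, 2):
    par = binary_dilation(pores, disk(r)) if r else pores
    print(r, int(par.sum()), round(perimeter(par), 1), euler_number(par))
```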
Abstract:
The deformation and damage mechanisms of carbon fiber-reinforced epoxy laminates deformed in shear were studied by means of X-ray computed tomography. In particular, the evolution of matrix cracking, interply delamination and fiber rotation was ascertained as a function of the applied strain. To provide quantitative information, an algorithm was developed to automatically determine the crack density and the fiber orientation from the tomograms. The investigation provided new insights into the complex interaction between the different damage mechanisms (i.e. matrix cracking and interply delamination) as a function of the applied strain, ply thickness and ply location within the laminate, as well as quantitative data about the evolution of matrix cracking and fiber rotation during deformation.
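One ingredient such an algorithm plausibly needs is an orientation estimate per tomogram slice. The sketch below uses a generic structure-tensor approach (an assumption, not the authors' implementation) and verifies it on synthetic stripes with a known angle.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def dominant_orientation(img, sigma=3.0):
    """Mean structure-tensor orientation of elongated features, in degrees."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    Jxx = gaussian_filter(gx * gx, sigma).mean()
    Jyy = gaussian_filter(gy * gy, sigma).mean()
    Jxy = gaussian_filter(gx * gy, sigma).mean()
    theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)  # dominant gradient direction
    return np.degrees(theta) + 90.0               # fibers run perpendicular to it

# sanity check: stripes whose crests run at 30 + 90 = 120 degrees
y, x = np.mgrid[0:128, 0:128]
a = np.deg2rad(30)
img = np.sin(0.4 * (x * np.cos(a) + y * np.sin(a)))
print(round(dominant_orientation(img), 1))        # ~120.0
```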
Abstract:
This article presents a new automatic evaluation method for on-line graphics, its application, and the advantages achieved by applying this correction method. The software application, developed by the Innovation in Education Group "E4" of the Technical University of Madrid, is aimed at the online self-assessment of the graphic drawings that students produce as continuous training. The adaptation to the European Higher Education Area is an important opportunity to research the possibilities of on-line assessment in education. To this end, a new software tool has been developed for continuous self-testing by undergraduates. Using this software it is possible to evaluate the graphical answers of the students: the drawings made on-line by students are automatically corrected according to their geometry (straight lines, sloping lines or second-order curves) and their size (depending on the specific values which define the graphics).
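A hedged sketch of the kind of geometric check such a corrector can perform: decide whether the points of a student's stroke fit a straight line within a pixel tolerance, via a principal-axis fit. The function name and tolerance are illustrative, not taken from the E4 tool.

```python
import numpy as np

def is_straight_line(points, tol=1.5):
    """points: (N, 2) stroke coordinates in pixels; tol: max residual allowed."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    # the best-fit line direction is the principal axis of the point cloud
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                        # unit normal to the best-fit line
    residuals = np.abs(centered @ normal)  # point-to-line distances
    return residuals.max() <= tol

print(is_straight_line([(0, 0), (10, 5.1), (20, 10)]))  # True: small wobble
```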
Abstract:
Nowadays, integrated circuit reliability is challenged by both variability and working conditions. Environmental radiation has become a major issue when ensuring a circuit's correct behavior. The irradiation, and the subsequent analysis performed on the circuit boards, is expensive in both funding and time. The lack of tools supporting pre-manufacturing radiation hardness analysis hinders circuit designers' tasks. This paper describes an extensively customizable simulation tool for the characterization of radiation effects on electronic systems. The proposed tool can produce an in-depth analysis of a complete circuit in almost any kind of radiation environment in affordable computation times.
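To make the idea concrete, here is a toy Monte Carlo fault-injection loop of the kind such simulators build on: flip one random bit of a modeled register state and count how often a combinational output is corrupted. Everything here, circuit included, is an invented stand-in for illustration, not the paper's tool.

```python
import random

def circuit_output(regs):
    """Toy combinational function of a 4-register state."""
    return (regs[0] ^ regs[1]) & (regs[2] | regs[3])

def seu_campaign(n_runs=10000, width=8):
    """Inject one single-event upset per run; return observed error rate."""
    golden = [0b10110010, 0b01011100, 0b11110000, 0b00001111]
    errors = 0
    for _ in range(n_runs):
        faulty = list(golden)
        reg = random.randrange(len(faulty))
        faulty[reg] ^= 1 << random.randrange(width)  # flip one random bit
        if circuit_output(faulty) != circuit_output(golden):
            errors += 1
    return errors / n_runs

print(f"observed error rate: {seu_campaign():.3f}")
```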
Abstract:
Automatic segmentation using univariate and multivariate techniques provides more objective and efficient segmentations of river systems (Alber & Piégay, 2011) and can complement the expert criteria traditionally used (Brenden et al., 2008).
INTEREST: A powerful tool to objectively segment the continuity of rivers, which is required for diagnosing problems associated with human impacts.
OBJECTIVE: To evaluate the potential of univariate and multivariate methods in the assessment of river adjustments produced by flow regulation.
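A minimal univariate example, assuming a single change point: place the breakpoint where splitting a longitudinal series (e.g., channel width sampled along the river) minimizes the pooled within-segment squared error. This is illustrative only; the cited methods differ in detail.

```python
import numpy as np

def best_split(series, min_size=5):
    """Index of the single change point minimizing within-segment variance."""
    x = np.asarray(series, float)
    costs = [x[:i].var() * i + x[i:].var() * (x.size - i)
             for i in range(min_size, x.size - min_size)]
    return min_size + int(np.argmin(costs))

rng = np.random.default_rng(3)
width = np.concatenate([rng.normal(40, 3, 60),   # unregulated reach
                        rng.normal(25, 3, 40)])  # regulated, narrower reach
print("change point near index:", best_split(width))  # ~60
```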
Abstract:
Images acquired during free breathing using first-pass gadolinium-enhanced myocardial perfusion magnetic resonance imaging (MRI) exhibit a quasiperiodic motion pattern that needs to be compensated for if a further automatic analysis of the perfusion is to be executed. In this work, we present a method to compensate for this movement by combining independent component analysis (ICA) and image registration. First, we use ICA and a time–frequency analysis to identify the motion and separate it from the intensity change induced by the contrast agent. Then, synthetic reference images are created by recombining all the independent components except the one related to the motion. Therefore, the resulting image series does not exhibit motion, and its images have intensities similar to those of their original counterparts. Motion compensation is then achieved by using a multi-pass image registration procedure. We tested our method on 39 image series acquired from 13 patients, covering the basal, mid and apical areas of the left heart ventricle and consisting of 58 perfusion images each. We validated our method by comparing manually tracked intensity profiles of the myocardial sections to automatically generated ones before and after registration of the 13 patient data sets (39 distinct slices). We compared linear, non-linear, and combined ICA-based registration approaches and previously published motion compensation schemes. Considering run-time and accuracy, a two-step ICA-based motion compensation scheme that first optimizes a translation and then a non-linear transformation performed best, achieving registration of the whole series in 32 ± 12 s on a recent workstation. The proposed scheme improves the Pearson correlation coefficient between manually and automatically obtained time–intensity curves from .84 ± .19 before registration to .96 ± .06 after registration.
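The ICA step can be sketched compactly with scikit-learn's FastICA: decompose the frame series, zero the component identified as motion (chosen by hand here), and recombine to obtain the synthetic reference frames; registration itself is omitted. The frame data, component count and component index are placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

def synthetic_references(series, motion_component, n_components=5):
    """series: (n_frames, n_pixels) perfusion images, one row per frame."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(series)        # temporal independent components
    sources[:, motion_component] = 0.0         # suppress the motion source
    return ica.inverse_transform(sources)      # motion-free reference frames

rng = np.random.default_rng(4)
frames = rng.random((58, 64 * 64))             # stand-in for one slice series
refs = synthetic_references(frames, motion_component=2)
print(refs.shape)                              # (58, 4096)
```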
Abstract:
This work is part of the CAMEVA project for the development of an expert system aimed at the automatic identification of ores [1, 2]. It relies on measuring their reflectance values, R, on digital images. The software for calibration, acquisition and analysis of the multispectral data was designed by AITEMIN [3]; the research was also assessed by H.J. Bernhardt and E. Pirard [1].
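The decision core of such a system can be pictured as a nearest-reference-spectrum classifier, sketched below; the reference reflectance values are invented placeholders, not calibrated CAMEVA data.

```python
import numpy as np

REFERENCES = {                # % reflectance at a few illustrative bands
    "pyrite":       [53.0, 54.5, 56.0, 57.0],
    "chalcopyrite": [42.0, 44.5, 47.0, 48.5],
    "galena":       [46.5, 45.0, 43.5, 42.5],
}

def identify(measured_R):
    """Assign a measured spectrum to the closest reference ore mineral."""
    dists = {mineral: np.linalg.norm(np.subtract(measured_R, ref))
             for mineral, ref in REFERENCES.items()}
    return min(dists, key=dists.get)

print(identify([43.0, 45.0, 46.5, 48.0]))   # -> chalcopyrite
```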
Abstract:
The application of thematic maps obtained through the classification of remote images requires products with optimal accuracy. Images registered from aircraft offer very satisfactory spatial resolution, but classical thematic classification methods do not always give better results than those obtained with satellite data. To improve the classification results, in this work first-return LIDAR (Light Detection And Ranging) data, registered simultaneously with the airborne spectral sensor data, are used jointly with the spectral data. The final results of the thematic classification of the scene under study have been obtained, quantified and discussed with and without LIDAR data, after applying different methods: Maximum Likelihood Classification, Support Vector Machines with four different kernel functions, and the Isodata clustering algorithm (ML, SVM-L, SVM-P, SVM-RBF, SVM-S, Isodata). The best results are obtained with the SVM with sigmoid kernel. They allow correlation with other physical parameters of great interest, such as the Manning hydraulic coefficient, for incorporation in a GIS and application in hydraulic modeling.
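The best-performing configuration above can be reproduced in outline with scikit-learn: an SVM with sigmoid kernel trained on stacked spectral bands plus a first-return LIDAR height feature. The data below are synthetic stand-ins for the scene.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 300
spectral = rng.random((n, 4))            # 4 spectral bands per pixel
lidar_h = rng.random((n, 1)) * 20        # first-return height (m)
X = np.hstack([spectral, lidar_h])       # joint spectral + LIDAR features
y = (lidar_h[:, 0] > 10).astype(int)     # toy labels: tall vs low cover

clf = make_pipeline(StandardScaler(), SVC(kernel="sigmoid"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```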
Abstract:
A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase; an evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can easily be extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
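A software-only sketch of the search loop: a (mu + lambda) Evolution Strategy, as named in the abstract, evolving a small vector of filter coefficients to minimize the reconstruction error of a decimated low-pass band, used here as a crude proxy for compressibility. The encoding and fitness are assumptions, not the paper's hardware design.

```python
import numpy as np

rng = np.random.default_rng(6)
signal = np.cumsum(rng.normal(size=256))             # smooth-ish sample data
t = np.arange(256)

def fitness(coeffs):
    """Reconstruction error after low-pass filtering and 2:1 decimation."""
    k = coeffs / coeffs.sum()
    low = np.convolve(signal, k, mode="same")[::2]   # analysis + decimation
    rec = np.interp(t, t[::2], low)                  # naive reconstruction
    return np.mean((signal - rec) ** 2)              # lower = more compressible

mu, lam, sigma = 4, 16, 0.1
pop = [rng.random(5) + 0.1 for _ in range(mu)]
for _ in range(100):
    kids = [np.abs(p + rng.normal(0, sigma, 5)) + 1e-6   # keep taps positive
            for p in pop for _ in range(lam // mu)]
    pop = sorted(pop + kids, key=fitness)[:mu]           # (mu + lambda) selection
print("best filter:", np.round(pop[0] / pop[0].sum(), 3))
```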
Abstract:
In this work we propose an image acquisition and processing methodology (framework) developed for in-field grape and leaf detection and quantification, based on six steps: 1) image segmentation through Fuzzy C-Means with Gustafson-Kessel (FCM-GK) clustering; 2) use of the FCM-GK outputs (centroids) as seeds for K-Means clustering; 3) identification of the clusters generated by K-Means using a Support Vector Machine (SVM) classifier; 4) morphological operations over the grape and leaf clusters in order to fill holes and eliminate small pixel clusters; 5) creation of a mosaic image using the Scale-Invariant Feature Transform (SIFT) in order to avoid overlap between images; 6) calculation of the areas of leaves and grapes and location of the centroids of the grape bunches. Image data are collected using a colour camera fixed to a mobile platform. This platform was developed to provide a stabilized surface and guarantee that the images are acquired parallel to the vineyard rows; in this way, the platform avoids the image distortion that leads to poor estimation of the areas. Our preliminary results are promising, although they show that it is still necessary to implement a camera stabilization system to avoid undesired camera movements, as well as a parallel processing procedure to speed up the mosaicking process.
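Steps 2 and 3 of the pipeline can be outlined with scikit-learn: K-Means seeded with externally computed centroids (standing in for the FCM-GK output), followed by an SVM that names each cluster. The centroids and colors are invented RGB placeholders, not values from the vineyard images.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(7)
pixels = rng.random((5000, 3))                 # flattened RGB image pixels

# step 2: seed K-Means with centroids computed elsewhere (FCM-GK stand-in)
seed_centroids = np.array([[0.2, 0.5, 0.1],    # leaves (greenish)
                           [0.3, 0.2, 0.5],    # grapes (bluish)
                           [0.6, 0.6, 0.6]])   # background
km = KMeans(n_clusters=3, init=seed_centroids, n_init=1).fit(pixels)

# step 3: an SVM trained on labeled examples names each K-Means cluster
svm = SVC(kernel="linear").fit(seed_centroids, ["leaf", "grape", "background"])
cluster_names = svm.predict(km.cluster_centers_)
print(dict(enumerate(cluster_names)))
```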
Abstract:
A framework for the automatic parallelization of (constraint) logic programs is proposed and proved correct. Intuitively, the parallelization process replaces conjunctions of literals with parallel expressions. Such expressions trigger at run-time the exploitation of restricted, goal-level, independent and-parallelism. The parallelization process performs two steps. The first builds a conditional dependency graph (which can be simplified using compile-time analysis information), while the second transforms the resulting graph into linear conditional expressions, the parallel expressions of the &-Prolog language. Several heuristic algorithms for the latter ("annotation") process are proposed and proved correct. Algorithms are also given which determine whether there is any loss of parallelism in the linearization process with respect to a proposed notion of maximal parallelism. Finally, a system is presented which implements the proposed approach. The performance of the different annotation algorithms is compared experimentally in this system by studying the time spent in parallelization and the effectiveness of the results in terms of speedups.
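A language-neutral sketch of the first step: build a dependency graph between body goals from shared variables, so that goal pairs with no shared variables are candidates for parallel execution. Real conditional dependency graphs also carry run-time independence conditions on the edges; this toy version omits them.

```python
goals = {                      # goal name -> variables it touches
    "p(X, Y)": {"X", "Y"},
    "q(X)":    {"X"},
    "r(Z)":    {"Z"},
    "s(Y, Z)": {"Y", "Z"},
}

names = list(goals)
edges = [(a, b)
         for i, a in enumerate(names) for b in names[i + 1:]
         if goals[a] & goals[b]]          # shared variable => dependency edge
print("dependency edges:", edges)
# unconnected pairs, e.g. q(X) and r(Z), are candidates for
# &-Prolog style parallel execution: q(X) & r(Z)
```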
Abstract:
The concept of independence has recently been generalized to the constraint logic programming (CLP) paradigm. In addition, several abstract domains specifically designed for CLP languages, whose information can be used to detect the generalized independence conditions, have recently been defined. As a result, we are now in a position where automatic parallelization of CLP programs is feasible. In this paper we study the task of automatically parallelizing CLP programs based on such analyses, by transforming them into explicitly concurrent programs in our parallel CC platform (CIAO) as well as into AKL. We describe the analysis and transformation process, and study its efficiency, accuracy, and effectiveness in program parallelization. The information gathered by the analyzers is evaluated not only in terms of its accuracy, i.e. its ability to determine the actual dependencies among the program variables, but also in terms of its effectiveness, measured as code reduction in the resulting parallelized programs. Given that only a few abstract domains have been defined for CLP so far, and that none of them was specifically designed for dependency detection, the aim of the evaluation is not only to assess the effectiveness of the available domains, but also to study what additional information it would be desirable to infer, and what domains would be appropriate for further improving the parallelization process.
Abstract:
Automatic grading of programming assignments is an important topic in academic research. It aims at improving the level of feedback given to students and optimizing the professor's time. Several studies have reported the development of software tools to support this process, so it is helpful to get a quick, clear view of their key features. This paper reviews an ample set of tools for automatic grading of programming assignments. They are divided into the most important mature tools, which have remarkable features, and those built recently, which have new features. The review includes the definition and description of key features, e.g. supported languages, technology used, infrastructure, etc. The two kinds of tools allow a temporal comparative analysis, which shows good improvements in this research field, including security, broader language support, plagiarism detection, etc. On the other hand, the lack of a grading model for assignments is identified as an important gap in the reviewed tools. Thus, a characterization of evaluation metrics to grade programming assignments is provided as a first step towards such a model. Finally, new paths in this research field are proposed.
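As a first approximation of such a grading model, one could combine per-assignment evaluation metrics with fixed weights, as in this sketch; the metric names and weights are illustrative, not a proposed standard.

```python
METRIC_WEIGHTS = {            # must sum to 1.0
    "tests_passed":    0.50,
    "style":           0.15,
    "efficiency":      0.15,
    "plagiarism_free": 0.20,
}

def grade(scores):
    """scores: metric -> value in [0, 1]; returns a grade on a 0-10 scale."""
    assert set(scores) == set(METRIC_WEIGHTS)
    return 10 * sum(METRIC_WEIGHTS[m] * scores[m] for m in scores)

print(round(grade({"tests_passed": 0.9, "style": 0.7,
                   "efficiency": 0.8, "plagiarism_free": 1.0}), 2))  # 8.75
```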