15 results for Method of Theoretical Images

at Universidad Politécnica de Madrid


Relevance: 100.00%

Publisher:

Abstract:

In this project, a MATLAB code has been developed for processing 3D tomographic images of asphalt samples from roads in Poland. These 3D images were acquired by a research team at the Lodz University of Technology (LUT). The aim of this project is to create a tool that can be used to study the different 3D asphalt samples and to support the analysis of the stress tests the samples undergo in the laboratory, with the ultimate goal of finding solutions to the degradation suffered by roads in Poland due to different causes, such as weather conditions. Road degradation has been investigated for many years, owing to the severe deterioration caused by factors such as climate, lack of maintenance or, in some cases, excessive traffic. It is in Poland where these three factors cause the composition of many roads to degrade rapidly, above all because of the weather conditions experienced throughout the year, with temperatures ranging from 30 °C in summer to -20 °C in winter. As a result, the road structure suffers heavily and the asphalt breaks up, which increases maintenance costs and road accidents. This project builds on the research being carried out at LUT, seeking to improve the analysis of the asphalt samples so that stress tests can be performed and solutions found to improve the asphalt on Polish roads, which would noticeably reduce maintenance costs. Although the project does not go deeply into the technical aspects of asphalt and its composition, a thorough study of all its characteristics was required in order to write a code capable of obtaining the best results. For these reasons, the algorithms that allow the study of the 3D asphalt specimens have been developed in MATLAB. This software was chosen because MATLAB is a powerful mathematical tool that operates on matrices very efficiently, making it possible to develop specific code for the treatment and processing of 3D images. Thanks to this tool, these algorithms perform processes such as segmentation of the 3D image, pre- and post-processing of the image, filtering, and all kinds of microstructural analysis of the asphalt samples under study. The code presented for segmenting the 3D asphalt samples is relatively simple in its design and development, thanks to the image-processing tools included in MATLAB, which significantly ease the programming task, and to the segmentation method used. The code has been designed with the aim of facilitating the analysis and study of the 3D images of the asphalt samples. Its main purpose is therefore to serve as a study tool, and it was written so that it could be integrated into a visual environment, making it easier and simpler to use. For this reason, all the algorithms and functions that have been developed are integrated into a visual tool built with MATLAB's GUIDE.
This tool was created in collaboration with Jorge Vega and was developed in his final degree project, entitled: Microstructural segmentation of 3D images of asphalt samples using MATLAB. It uses all the functions programmed in this project, and its objective is to provide an intuitive, easy-to-use graphical environment for the study of the 3D asphalt samples. This project is divided into four chapters. The first is the introduction, which presents the most important aspects of the project. The second chapter presents all the technical background that had to be studied in order to develop the tool, covering the three most important topics addressed in this project: asphalt materials, the principles of 3D tomography, and image processing. This provides the basis for the third chapter, which describes the methodology used in writing the code, explaining the MATLAB working environment and all the image-processing functions used. In addition, all the developed code is shown, together with a theoretical description of the methods used for the pre-processing and segmentation of the 3D images. Chapter 4 presents the results obtained in the study of one of the asphalt samples, and, finally, the last chapter draws the conclusions on the development of this project. All the points established as the starting point in the preliminary project for creating the tool have been completed, although new possibilities for this code have been left for future projects, such as the automatic detection of the different regions of an asphalt sample according to its composition. As this project shows, image-processing techniques are increasingly used in many areas, whether industrial or medical. Consequently, this type of project offers many possibilities and can be the basis for many new applications to be developed in the future. Finally, it is concluded that this project has helped to strengthen programming skills, broadening knowledge of MATLAB and of image-processing theory. Likewise, this work provides a basis for the development of a broader project whose goal is a tool that can be used by the research team at the Lodz University of Technology and in future projects.

ABSTRACT: In this project, a MATLAB code has been developed to process X-ray tomographic 3D images of asphalt specimens. These 3D images were taken by a research team at the Lodz University of Technology (LUT). The aim of this project is to create a tool that can be used to study different asphalt specimens and to analyse them after the stress tests the samples undergo, with the final goal of finding solutions to the degradation suffered by roads in Poland due to different causes, such as weather conditions. The degradation of roads is an issue that has been investigated for many years, owing to the strong deterioration caused by various factors such as climate, poor maintenance or, in some cases, excessive traffic.
It is in Poland where these three factors make the composition of many roads degrade rapidly, especially due to the weather conditions experienced throughout the year, with temperatures ranging from 30 °C in summer to -20 °C in winter. This causes the roads to suffer heavily and the asphalt to break up shortly after being laid, increasing maintenance costs and road accidents. This project builds on the research taking place at LUT: in order to analyse the asphalt specimens better, they are subjected to stress tests so that solutions can be found to improve the asphalt on Polish roads, which would decrease maintenance costs remarkably. Although this project does not go into technical aspects such as the asphalt and its composition, a deep study of all of its features was required in order to create a code able to obtain the best results. For these reasons, algorithms that allow the study of 3D specimens of asphalt have been developed in MATLAB. MATLAB is a powerful mathematical tool that operates on arrays very quickly, allowing specific code to be developed for the treatment and processing of 3D images. Thus, these algorithms perform processes such as multidimensional matrix segmentation, pre- and post-processing with the corresponding filtering algorithms, and microstructural analysis of the asphalt specimen being studied. All the algorithms and functions that have been developed are integrated into a visual tool built with the GUIDE of MATLAB. This tool was created within the project of Jorge Vega, entitled: Microstructural segmentation of 3D images of asphalt specimens using the MATLAB engine. In this tool all the functions programmed in this project are used, and its aim is to provide an easy and intuitive graphical environment for the study of 3D asphalt samples. This project has been divided into four chapters plus the introduction. The second chapter introduces the state of the art of the three most important topics studied in this project: asphalt materials, the principles of X-ray tomography, and image processing. This is the basis for the third chapter, which outlines the methodology used in developing the code, explaining the MATLAB working environment and all the image-processing functions used. In addition, all the developed code is shown, as well as a theoretical description of the methods used for pre-processing and 3D image segmentation. Chapter 4 presents the results obtained from the study of one of the asphalt specimens, and, finally, the last chapter draws the conclusions regarding the development of this project.
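The record above describes threshold-based segmentation and microstructural measurements on 3D CT volumes. As a loose illustration of that kind of step, here is a minimal Python sketch with NumPy and scikit-image (not the project's MATLAB code; the synthetic volume, the Gaussian pre-filter and the Otsu threshold are illustrative assumptions):

    import numpy as np
    from skimage.filters import gaussian, threshold_otsu

    # Placeholder CT volume: a real stack from the LUT scanner would be loaded instead.
    rng = np.random.default_rng(0)
    volume = rng.normal(loc=0.6, scale=0.1, size=(64, 64, 64))
    volume[20:40, 20:40, 20:40] -= 0.4          # darker block standing in for air voids

    smoothed = gaussian(volume, sigma=1)         # pre-processing: attenuate acquisition noise
    level = threshold_otsu(smoothed)             # global threshold between voids and solid phases
    voids = smoothed < level                     # binary 3D segmentation of the pore phase

    porosity = voids.mean()                      # fraction of voxels classified as void
    print(f"threshold = {level:.3f}, void fraction = {porosity:.2%}")

In the thesis the analogous operations would be carried out with MATLAB's image-processing functions and wrapped in the GUIDE interface described above.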

Relevance: 100.00%

Publisher:

Abstract:

The data acquired by remote sensing systems allow thematic maps of the Earth's surface to be obtained by classifying the registered image. This implies the identification and categorization of all pixels into land cover classes. Traditionally, methods based on statistical parameters have been widely used, although they show some disadvantages. Nevertheless, some authors indicate that methods based on artificial intelligence may be a good alternative. Thus, fuzzy classifiers, which are based on fuzzy logic, include additional information in the classification process through rule-based systems. In this work, we propose the use of a genetic algorithm (GA) to select the optimal and minimum set of fuzzy rules to classify remotely sensed images. The input information for the GA has been obtained from the training space determined by two uncorrelated spectral bands (2D scatter diagrams), which has been irregularly divided by five linguistic terms defined in each band. The proposed methodology has been applied to Landsat-TM images and has shown that this set of rules provides a higher accuracy level in the classification process.
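As a rough sketch of the selection mechanism described (not the paper's actual encoding or fitness function; the rule-pool size, the penalty weight and the toy accuracy function are invented), a binary-coded GA that searches for a small, accurate subset of candidate fuzzy rules could look like this:

    import random

    random.seed(1)
    N_RULES = 25                 # assumed size of the candidate fuzzy-rule pool
    POP, GENS, PENALTY = 30, 60, 0.01

    def accuracy(subset):
        # Placeholder for classifying the training pixels with the selected rules
        # and measuring the hit rate; here a handful of rule indices are "useful".
        useful = {0, 3, 7, 12, 18}
        return len(useful & set(subset)) / len(useful)

    def fitness(chrom):
        subset = [i for i, g in enumerate(chrom) if g]
        if not subset:
            return 0.0
        return accuracy(subset) - PENALTY * len(subset)   # reward accuracy, penalize rule count

    def crossover(a, b):
        cut = random.randrange(1, N_RULES)
        return a[:cut] + b[cut:]

    def mutate(chrom, rate=0.02):
        return [1 - g if random.random() < rate else g for g in chrom]

    pop = [[random.randint(0, 1) for _ in range(N_RULES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        pop = parents + children

    best = max(pop, key=fitness)
    print("selected rules:", [i for i, g in enumerate(best) if g])

The penalty term is what pushes the search toward the "minimum" rule set mentioned in the abstract; without it, accuracy alone would favour keeping every rule.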

Relevance: 100.00%

Publisher:

Abstract:

Once the advantages of object-based classification over pixel-based classification are accepted, the need arises for simple and affordable methods to define and characterize the objects to be classified. This paper presents a new methodology for the identification and characterization of objects at different scales, through the integration of the spectral information provided by the multispectral image and the textural information from the corresponding panchromatic image. In this way, a set of objects has been defined that yields a simplified representation of the information contained in the two source images. These objects can be characterized by different attributes that allow discrimination between different spectral&textural patterns. This methodology facilitates information processing, from both a conceptual and a computational point of view. Thus, the attribute vectors defined can be used directly as training pattern inputs for certain classifiers, such as artificial neural networks. Growing Cell Structures have been used to classify the merged information.
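A minimal Python sketch of the kind of object characterization described (the label image, the band values and the choice of local standard deviation as the texture measure are illustrative assumptions, not the paper's method):

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    multispectral = rng.random((4, 100, 100))    # four spectral bands (placeholder data)
    panchromatic = rng.random((100, 100))        # corresponding panchromatic image (placeholder)

    # Toy label image: 16 square segments standing in for objects defined at a given scale.
    rows, cols = np.indices((100, 100))
    objects = (rows // 25) * 4 + (cols // 25) + 1

    # Local standard deviation of the panchromatic image as a simple texture measure.
    texture = ndimage.generic_filter(panchromatic, np.std, size=5)

    attributes = []
    for lab in np.unique(objects):
        mask = objects == lab
        spectral_means = [band[mask].mean() for band in multispectral]
        attributes.append(spectral_means + [texture[mask].mean()])
    attributes = np.asarray(attributes)          # one spectral&textural vector per object
    print(attributes.shape)                      # -> (16, 5)

The resulting attribute vectors could then be fed to a classifier such as a Growing Cell Structures network or another artificial neural network, as the abstract indicates.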

Relevance: 100.00%

Publisher:

Abstract:

In this paper, the fusion of probabilistic knowledge-based classification rules and learning automata theory is proposed, and as a result we present a set of probabilistic classification rules with self-learning capability. The probabilities of the classification rules change dynamically, guided by a supervised reinforcement process aimed at obtaining an optimum classification accuracy. This novel classifier is applied to the automatic recognition of digital images corresponding to visual landmarks for the autonomous navigation of an unmanned aerial vehicle (UAV) developed by the authors. The classification accuracy of the proposed classifier and its comparison with well-established pattern recognition methods are finally reported.
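For illustration only, below is a linear reward/penalty update of the kind used in learning automata, one standard way rule probabilities can adapt under supervised reinforcement; the rules, the feedback and the step sizes are assumptions, not the authors' exact scheme:

    import random

    random.seed(0)
    rules = ["landmark_A", "landmark_B", "landmark_C"]   # stand-ins for classification rules
    p = [1 / len(rules)] * len(rules)                    # rule-selection probabilities
    a, b = 0.1, 0.05                                     # reward and penalty step sizes

    def reinforce(chosen, correct):
        """Linear reward/penalty update of the rule probabilities (kept normalized)."""
        global p
        old = p[:]
        new = []
        for i, pi in enumerate(old):
            if correct:                                  # reward the chosen rule
                new.append(pi + a * (1 - pi) if i == chosen else pi * (1 - a))
            else:                                        # penalize it, spreading mass to the others
                new.append(pi * (1 - b) if i == chosen else pi * (1 - b) + b / (len(old) - 1))
        p = new

    for _ in range(200):                                 # simulated supervised feedback loop
        chosen = random.choices(range(len(rules)), weights=p)[0]
        reinforce(chosen, correct=(chosen == 1))         # pretend rule 1 is the right one
    print([round(x, 3) for x in p])                      # mass should concentrate on rule 1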

Relevance: 100.00%

Publisher:

Abstract:

Expert knowledge is used to assign probabilities to events in many risk analysis models. However, experts sometimes find it hard to provide specific values for these probabilities, preferring to express them with vague or imprecise terms that are mapped onto a previously defined fuzzy number scale. The rigidity of these scales introduces bias into the probability elicitation process and does not allow experts to adequately express their probabilistic judgments. We present an interactive method for eliciting from experts a fuzzy number that represents their probabilistic judgment for a given event, along with a quality measure of that judgment, useful in a final information-filtering and sensitivity-analysis process.
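As a small illustration of the representation involved (the numbers are invented and this is not the paper's interactive elicitation procedure), an expert's imprecise judgment can be encoded as a triangular fuzzy number and queried through its membership function:

    def triangular_membership(x, a, m, b):
        """Membership degree of x in the triangular fuzzy number (a, m, b)."""
        if x <= a or x >= b:
            return 0.0
        return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

    # "The probability is around 0.3, and surely between 0.2 and 0.5" (made-up judgment).
    expert_judgment = (0.2, 0.3, 0.5)
    for x in (0.25, 0.30, 0.40):
        print(x, round(triangular_membership(x, *expert_judgment), 3))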

Relevance: 100.00%

Publisher:

Abstract:

One of the fundamental aspects of adapting teaching to the European Higher Education Area is the change from models based on teaching by the lecturer to models based on student learning. In this work we present an educational experience developed with the case method, with a clearly multidisciplinary character. The experience has been carried out in the teaching of the analysis and verification of safety rails. This is a multidisciplinary field that presents great difficulties in its teaching. The use of the case method has given good results in the competences achieved by the students.

Relevance: 100.00%

Publisher:

Abstract:

The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help lecturers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with the objective of developing a model for teaching and evaluating core competences and providing support to lecturers. This paper deals with the problem-solving competence. The first step has been to elaborate a guide for teachers to provide a homogeneous way to assess this competence. This guide considers several levels of acquisition of the competence and provides the rubrics to be applied for each one. The guide has subsequently been validated with several pilot experiences. In this paper we will explain the problem-solving assessment guide for teachers and will show the pilot experiences that have been carried out. We will finally justify the validity of the method to assess the problem-solving competence.

Relevance: 100.00%

Publisher:

Abstract:

The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as basis functions. The properties required of the pyramidal filter produce a family of filters which, in the classical case of a filter of length 5, is uniparametric. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges of this single parameter. These fractal filters lose less energy at every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the threshold values of the Otsu algorithm, and conclude that our algorithm produces reliable test results.
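For reference, the classical 5-tap generating kernel of the Burt-Adelson pyramid can be written in terms of its single free parameter a, w = [1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2]. The Python sketch below (invented 1-D signal, and a variance ratio used only as a crude proxy for the energy kept at each level) merely illustrates how the parameter shapes the filter and one REDUCE step; it is not the paper's soil-image segmentation experiment:

    import numpy as np

    def generating_kernel(a):
        """Classical 5-tap pyramid kernel; every member of the family sums to 1."""
        return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

    def reduce_step(signal, a):
        """One REDUCE step along a 1-D signal: blur with the kernel, then downsample by 2."""
        return np.convolve(signal, generating_kernel(a), mode="same")[::2]

    rng = np.random.default_rng(0)
    row = rng.random(256)                        # stand-in for one line of a soil image
    for a in (0.3, 0.4, 0.6):                    # a = 0.4 gives the Gaussian-like member
        coarse = reduce_step(row, a)
        kept = coarse.var() / row.var()          # crude proxy for the energy kept at this level
        print(f"a = {a}: kernel = {np.round(generating_kernel(a), 3)}, variance ratio = {kept:.3f}")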

Relevance: 100.00%

Publisher:

Abstract:

Important physical and biological processes in soil-plant-microbial systems are dominated by the geometry of soil pore space, and a correct model of this geometry is critical for understanding them. We analyze the geometry of soil pore space using X-ray computed tomography (CT) of intact soil columns. We present here some preliminary results of our investigation of Minkowski functionals of parallel sets to characterize soil structure. We also show how the evolution of Minkowski morphological measurements of parallel sets may help to characterize the influence of conventional tillage and of a permanent cover crop of resident vegetation on soil structure in a Spanish Mediterranean vineyard.
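A rough sketch of the "parallel sets" idea in Python/SciPy, with a synthetic binary pore space standing in for the CT data: dilate the pore phase by an increasing radius r and track how the simplest Minkowski functional, the volume, evolves with r. The remaining 3D functionals (surface area, integral of mean curvature, Euler characteristic) would be tracked in the same spirit but are omitted here:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    pores = rng.random((40, 40, 40)) < 0.10                # synthetic binary pore space (~10% porosity)

    structure = ndimage.generate_binary_structure(3, 1)    # 6-connected voxel neighbourhood
    for r in range(6):
        parallel_set = pores if r == 0 else ndimage.binary_dilation(
            pores, structure=structure, iterations=r)
        print(f"r = {r}: volume fraction of the parallel set = {parallel_set.mean():.3f}")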

Relevance: 100.00%

Publisher:

Abstract:

The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help lecturers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with the objective of developing a model for teaching and evaluating core competences and providing support to lecturers. This paper deals with the problem-solving competence. The first step has been to elaborate a guide for teachers to provide a homogeneous way to assess this competence. This guide considers several levels of acquisition of the competence and provides the rubrics to be applied for each one. The guide has subsequently been validated with several pilot experiences. In this paper we will explain the problem-solving assessment guide for teachers and will show the pilot experiences that have been carried out. We will finally justify the validity of the method to assess the problem-solving competence.

Relevance: 100.00%

Publisher:

Abstract:

The Monge–Ampère (MA) equation arising in illumination design is highly nonlinear, so the convergence of the MA method is strongly determined by the initial design. In this paper we address the initial design of the MA method with the L2 Monge–Kantorovich (LMK) theory and introduce an efficient approach for finding the optimal mapping of the LMK problem. Three examples, including the beam shaping of a collimated beam and of a point light source, are given to illustrate the potential benefits of the LMK theory in the initial design. The results show that the MA method converges faster and more stably when the LMK theory is applied in the initial design.
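For reference, the underlying optimization can be sketched in standard textbook form (the notation here is generic, not necessarily the paper's). The LMK problem seeks the mapping m that minimizes

    \min_{m} \int \lvert m(x) - x \rvert^{2} \, \rho_{0}(x) \, dx
    \quad \text{subject to} \quad
    \rho_{0}(x) = \rho_{1}\!\left(m(x)\right) \left\lvert \det \nabla m(x) \right\rvert ,

whose optimal map is the gradient of a convex potential, m = \nabla\varphi; substituting this into the mass-conservation constraint gives the Monge–Ampère equation

    \rho_{1}\!\left(\nabla\varphi(x)\right) \det D^{2}\varphi(x) = \rho_{0}(x) ,

which is why an LMK solution is a natural starting point (initial design) for the MA method.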

Relevance: 100.00%

Publisher:

Abstract:

Concentrating Photovoltaics (CPV) is an alternative to flat-plate photovoltaic (PV) module technology. The bankability of CPV projects is an important issue for paving the way toward swift and sustained growth of this technology. The bankability of a PV plant is generally addressed through the modeling of its energy yield under a baseline loss scenario, followed by an on-site measurement campaign aimed at verifying its energy performance. This paper proposes a procedure for assessing the performance of a CPV project, articulated around four main successive steps: Solar Resource Assessment, Yield Assessment, Certificate of Provisional Acceptance, and Certificate of Final Acceptance. This methodology allows the long-term energy production of a CPV project to be estimated with an associated uncertainty of ≈5%. To our knowledge, no such method has been proposed to the CPV industry yet, and this critical situation has hindered or made impossible the completion of several important CPV projects around the world. The main motive for the proposed method is to bring a practical solution to this urgent problem. The procedure can be operated under a wide range of climatic conditions, and makes it possible to assess the bankability of a CPV plant whose design uses any of the technologies currently available on the market. The method is also compliant with both international standards and local regulations. In consequence, its applicability is both general and international.
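Purely as an illustration of how such an uncertainty figure is typically used downstream (the P90 convention and the numbers below are not part of the paper's procedure), a central yield estimate and a ≈5% overall uncertainty translate into exceedance values like this:

    from statistics import NormalDist

    p50_energy = 100.0          # assumed central (P50) long-term estimate, e.g. in GWh/yr
    uncertainty = 0.05          # the ~5% overall relative uncertainty quoted in the abstract

    z90 = NormalDist().inv_cdf(0.90)                   # ~1.28 for a normal distribution
    p90_energy = p50_energy * (1 - z90 * uncertainty)  # yield exceeded with 90% probability
    print(f"P90 estimate: {p90_energy:.1f} GWh/yr")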

Relevance: 100.00%

Publisher:

Abstract:

The effect of type of fiber, site of fermentation, method for quantifying insoluble and soluble dietary fiber, and their correction for intestinal mucin on fiber digestibility were examined in rabbits. Three diets differing in soluble fiber were formulated (8.5% soluble fiber, on a DM basis, in the low soluble fiber [LSF] diet; 10.2% in the medium soluble fiber [MSF] diet; and 14.5% in the high soluble fiber [HSF] diet). They were obtained by replacing half of the dehydrated alfalfa in the MSF diet with a mixture of beet and apple pulp (HSF diet) or with a mix of oat hulls and soybean protein (LSF diet). Thirty rabbits with ileal T-cannulas were used to determine ileal and fecal digestibility. Cecal digestibility was determined as the difference between fecal and ileal digestibility. Insoluble fiber was measured as NDF, insoluble dietary fiber (IDF), and in vitro insoluble fiber, whereas soluble fiber was calculated as the difference between total dietary fiber (TDF) and NDF (TDF-NDF), IDF (TDF-IDF), and in vitro insoluble fiber (TDF-in vitro insoluble fiber). The intestinal mucin content was used to correct the TDF and soluble fiber digestibility. Ileal and fecal concentration of mucin increased from the LSF to the HSF diet group (P < 0.01). Once corrected for intestinal mucin, ileal and fecal digestibility of TDF and soluble fiber increased whereas cecal digestibility decreased (P < 0.01). Ileal digestibility of TDF increased from the LSF to the HSF diet group (12.0 vs. 28.1%; P < 0.01), with no difference in the cecum (26.4%), resulting in a higher fecal digestibility from the LSF to the HSF diet group (P < 0.01). Ileal digestibility of insoluble fiber increased from the LSF to the HSF diet group (11.3 vs. 21.0%; P < 0.01), with no difference in the cecum (13.9%) and no effect of fiber method, resulting in a higher fecal digestibility for rabbits fed the HSF diet compared with the MSF and LSF diet groups (P < 0.01). Fecal digestibility of NDF was higher than that of IDF or in vitro insoluble fiber (P < 0.01). Ileal soluble fiber digestibility was higher for the HSF than for the LSF diet group (43.6 vs. 14.5%; P < 0.01), and the fiber method did not affect it. Cecal soluble fiber digestibility decreased from the LSF to the HSF diet group (72.1 vs. 49.2%; P < 0.05). The lowest cecal and fecal soluble fiber digestibility was measured using TDF-NDF (P < 0.01). In conclusion, a correction for intestinal mucin is necessary for ileal TDF and soluble fiber digestibility, whereas the selection of the fiber method is of minor relevance. The inclusion of sugar beet and apple pulp increased the amount of TDF fermented in the small intestine.

Relevance: 100.00%

Publisher:

Abstract:

The different theoretical models for storm wave characterization focus on determining the significant wave height at the storm peak, the mean period and, usually assuming a triangular storm shape, the storm duration. In some cases, the main direction is also considered. Nevertheless, the definition of the whole storm history, including the variation of the main random variables during the storm cycle, is not taken into consideration. The representativeness of the proposed storm models, analysed in a recent study using an empirical time-dependent maximum energy flux function, shows that the behaviour of the different storm models is extremely dependent on the climatic characteristics of the project area. Moreover, there are no theoretical models able to adequately reproduce the storm history evolution of sea states characterized by important swell components. To overcome this shortcoming, several theoretical storm shapes are investigated, building on the three best theoretical storm models: the Equivalent Magnitude Storm (EMS), the Equivalent Number of Waves Storm (ENWS) and the Equivalent Duration Storm (EDS) models. To analyse the representativeness of the new storm shapes, the aforementioned maximum energy flux formulation and a wave overtopping discharge function for structures are used. With the empirical energy flux formulation, the assessment of the different approaches focuses on the progressive loss of hydraulic stability of the main armour layer caused by real and theoretical storms. For the overtopping equation, the total discharged volume is considered. In all cases, the results obtained highlight the greater representativeness of the triangular EMS model for sea waves and of the trapezoidal (non-parallel sides) EMS model for waves with a higher degree of wave development. Taking into account the increase in offshore and shallow-water wind turbines, maritime transport and deep vertical breakwaters, the maximum wave height of the whole storm history, and that corresponding to each sea state belonging to its cycle's evolution, are also considered. The procedure uses the information usually available for the characterization of extreme waves. Extrapolations of the maximum wave height of the selected storms have also been considered. The fourth-order statistics of the sea states belonging to the real and theoretical storms have been estimated to complete the statistical analysis of individual wave heights.
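A hedged sketch of the equivalent-triangle idea behind models such as the EMS (details such as the calm threshold, the base of the triangle and the exact magnitude definition vary between formulations, and the storm record below is invented): the real storm history is replaced by a triangle with the same peak significant wave height and the same area under the Hs curve, from which an equivalent duration follows.

    import numpy as np

    hours = np.arange(36)                                       # hourly sea states (invented record)
    hs = 2.0 + 3.5 * np.exp(-((hours - 16) / 8.0) ** 2)         # toy significant wave height history (m)

    threshold = 2.0                                             # calm level taken as the storm base
    peak_hs = hs.max()
    magnitude = float(np.sum(np.clip(hs - threshold, 0, None))) # area above the threshold, m*h

    duration = 2 * magnitude / (peak_hs - threshold)            # triangle with same peak and same area
    print(f"peak Hs = {peak_hs:.2f} m, equivalent triangular duration = {duration:.1f} h")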

Relevance: 100.00%

Publisher:

Abstract:

In this paper some topics related to the design of reinforced concrete (RC) shells are addressed. The influence of the reinforcement layout on the service and ultimate behavior of the shell structure is discussed. The well-established methodology for dimensioning and verifying RC sections of beam structures is extended, so that the design and verification of two-dimensional RC structures, in particular membrane and shell structures, can be treated within a unified procedure. Realistic design situations, such as multiple steel families and non-orthogonal reinforcement layouts, can be handled. Finally, some examples and applications of the proposed methodology are presented.