963 results for Palaeomagnetism Applied to Geologic Processes
Abstract:
The project presented here is based on the technologies used for object detection and recognition, especially of leaves and chromosomes. The document therefore contains the typical parts of a scientific paper: an Abstract, an Introduction, sections related to the research area, future work, conclusions, and the references used in its elaboration. The Abstract describes what the reader will find in this paper, namely the technologies employed for pattern detection and recognition of leaves and chromosomes and the existing work on cataloguing these objects. The Introduction explains the meanings of detection and recognition. This is necessary because many papers, especially those dealing with chromosomes, confuse the two terms. Detecting an object means gathering the parts of the image that are useful and discarding the useless ones; in short, detection amounts to finding the object's borders. Recognition refers to the process by which the computer, or the machine, determines what kind of object is being handled. We then present a compilation of the most widely used technologies for object detection in general. There are two main groups in this category: those based on image derivatives and those based on ASIFT points. The derivative-based methods have in common that the image is processed by convolving it with a previously defined matrix. This is done to detect borders in the image, i.e., changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity because they use only the first derivative, and Laplacian-based methods, which search for zero crossings because they use the second derivative. The choice between them depends on the level of detail required in the final result: gradient-based methods involve fewer operations, so they consume less time and fewer resources but yield lower quality, whereas Laplacian-based methods require more time and resources but produce a much higher-quality result. After explaining the derivative-based methods, we review the different algorithms available in each group. The other large group of technologies for object recognition is based on ASIFT points, which rely on six image parameters and compare two images with respect to those parameters. The disadvantage of these methods, for our future purposes, is that they are only valid for one single object: if we try to recognize two different leaves, even if they belong to the same species, this method will not recognize both of them. It is nevertheless important to mention this type of technology, since we are discussing recognition methods in general. At the end of the chapter a comparison of the pros and cons of all the technologies is given, first separately and then all together, with our purposes in mind. The next chapter, on recognition techniques, is not very extensive because, although there are general steps for object recognition, every object to be recognized requires its own method, so no general procedure can be specified in that chapter.
We then move on to leaf detection techniques on computers, using the derivative-based technique explained above. The next step is to turn the leaf into a set of parameters. Depending on the document consulted, there are more or fewer parameters. Some papers recommend dividing the leaf into 3 main features (shape, dent and vein), from which mathematical operations yield up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and extracts 12 secondary features from them. This second alternative is the most widely used, so it is taken as the reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after clicking on both ends of the leaf, automatically reports the species to which the leaf belongs. To do so, it only requires a database. In the tests reported in that document, the authors claim an accuracy of 90.312% over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes are disorganized, must be converted into the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and angle sweeping. Skeletonization consists of suppressing the inner pixels of the chromosome so that only its silhouette remains. This method is very similar to the derivative-based ones, but instead of detecting borders it detects the interior of the chromosome. The second technique sweeps angles from the beginning of the chromosome and, taking into account that a single chromosome cannot bend by more than a given angle X, detects its various regions. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding pattern (grey-scale bands) that makes each chromosome unique. The program detects the longitudinal axis of the chromosome and reconstructs the band profiles, after which the computer is able to recognize the chromosome. Concerning future work, we generally have two independent techniques that do not combine detection and recognition, so our main focus would be to prepare a program that joins both. On the leaf side we have seen that detection and recognition are linked, since both share the option of dividing the leaf into 5 main features. The remaining work is to create an algorithm that connects both methods, because in the recognition program both ends of the leaf have to be clicked, so it is not an automatic algorithm. On the chromosome side, an algorithm should be created that searches for the beginning of the chromosome and then starts sweeping angles, in order to pass the parameters to the program that searches for the band profiles. Finally, the summary explains why this kind of research is needed: with global warming, many species (animals and plants) are starting to become extinct, which is why a large database gathering all possible species is needed. To recognize an animal species, it is enough to have its 23 chromosomes.
To recognize a plant, there are several ways of doing it, but the easiest element to scan and input into a computer is the leaf of the plant.
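As an illustration of the five main leaf features mentioned above, the following sketch (assuming a pre-segmented binary leaf mask, OpenCV and NumPy, and the two leaf ends supplied manually, mirroring the clicks required by the cited program) computes diameter, physiological length, physiological width, area and perimeter. It is not the source code of the referenced paper.

```python
# Minimal sketch (assumed OpenCV on a binary leaf mask; the two leaf ends are given
# manually; the 12 secondary features are not derived here).
import numpy as np
import cv2

def leaf_main_features(mask: np.ndarray, apex: tuple, base: tuple) -> dict:
    """Five main features of a leaf from a binary mask (255 = leaf, 0 = background)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    leaf = max(contours, key=cv2.contourArea)           # largest blob = the leaf
    area = cv2.contourArea(leaf)                        # leaf area [px^2]
    perimeter = cv2.arcLength(leaf, closed=True)        # leaf margin length [px]

    pts = leaf.reshape(-1, 2).astype(float)
    # Diameter: longest distance between any two contour points (convex hull suffices)
    hull = cv2.convexHull(leaf).reshape(-1, 2).astype(float)
    diffs = hull[:, None, :] - hull[None, :, :]
    diameter = float(np.sqrt((diffs ** 2).sum(-1)).max())

    # Physiological length: distance between the two clicked leaf ends (apex and base)
    apex, base = np.asarray(apex, float), np.asarray(base, float)
    axis = apex - base
    length = float(np.linalg.norm(axis))

    # Physiological width: extent of the contour perpendicular to that main axis
    normal = np.array([-axis[1], axis[0]]) / (length + 1e-9)
    proj = (pts - base) @ normal
    width = float(proj.max() - proj.min())

    return {"diameter": diameter, "physiological_length": length,
            "physiological_width": width, "area": area, "perimeter": perimeter}
```

The 12 secondary features would then be obtained as ratios and combinations of these five values.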
Abstract:
The main question addressed in this thesis is the improvement of automatic speaker recognition systems through the introduction of a new front-end module that we have called Gender-Dependent Extended Biometric Parameterisation (GDEBP). This front-end does not constitute a complete break with the classical parameterisation techniques used in speaker recognition, but rather a new way to obtain these parameters while introducing some complementary ones.
Specifically, we propose a gender-dependent parameterisation since, as is well known, male and female voices have different characteristics, and therefore the use of different parameters to model these distinguishing characteristics should provide a better characterisation of speakers. Additionally, we propose the introduction of a new set of biometric parameters extracted from the components that result from the deconstruction of the voice into its glottal source estimate (closely related to the phonation process and the organs involved, and therefore to the physical characteristics of the speaker) and its vocal tract estimate (closely related to acoustic articulation and therefore to the spoken message). These biometric parameters complement the classical MFCC extracted from the power spectral density of the speech signal as a whole. In order to check the validity of this proposal we establish different practical scenarios, using different databases, so as to verify that GDEBP generates a more accurate description of speakers than classical approaches based on gender-independent MFCC. Specifically, we propose scenarios based on text-constrained and text-independent tests using the HESPERIA and ALBAYZIN databases. This work is also completed with the participation in two international speaker recognition evaluations, NIST SRE (2010 and 2012) and MOBIO 2013, with diverse results. In the first case, due to the nature of the NIST databases, we obtain results close to the state of the art while confirming our hypothesis, whereas in the MOBIO SRE the submitted system obtained the best simple-system performance for female speakers. Although the study of classification systems is beyond the scope of this thesis, we found it necessary to analyse the performance of different classification systems in order to verify their effect on the proposed parameterisation. In particular, we have addressed the use of speaker recognition systems based on the GMM-UBM paradigm, supervectors and i-vectors. The presented results confirm that the selection of a set of parameters that allows for a more accurate description of the speakers is as important as the selection of the classification method used by the biometric system. In this sense, the proposed parameterisation constitutes a step forward in improving speaker recognition systems, since even when using relatively simple classification systems, really competitive recognition rates are achieved.
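As a rough illustration of the classification back-end mentioned above, the sketch below builds gender-dependent GMM-UBM models from classical MFCCs using librosa and scikit-learn (assumed libraries, not the thesis software); the glottal-source and vocal-tract parameters that form the GDEBP front-end are not modelled here, and the MAP adaptation is only approximated.

```python
# Minimal sketch of gender-dependent GMM-UBM scoring on MFCCs only
# (assumed libraries: librosa, scikit-learn, numpy; not the thesis code).
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_feats(path, sr=16000, n_mfcc=20):
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # frames x coefficients

def train_ubm(feature_list, n_components=64):
    # Gender-dependent UBM: call once with male data, once with female data.
    X = np.vstack(feature_list)
    return GaussianMixture(n_components=n_components,
                           covariance_type='diag', max_iter=200).fit(X)

def adapt_speaker_model(ubm, X, relevance=16.0):
    # Very rough MAP-style mean adaptation of the UBM towards one speaker's data.
    post = ubm.predict_proba(X)                  # frame-level responsibilities
    n_k = post.sum(axis=0)[:, None]              # soft counts per component
    f_k = post.T @ X                             # first-order statistics
    alpha = n_k / (n_k + relevance)
    spk = GaussianMixture(n_components=ubm.n_components, covariance_type='diag')
    spk.weights_, spk.covariances_ = ubm.weights_, ubm.covariances_
    spk.means_ = alpha * (f_k / np.maximum(n_k, 1e-6)) + (1 - alpha) * ubm.means_
    spk.precisions_cholesky_ = ubm.precisions_cholesky_
    return spk

def llr_score(test_feats, spk_model, ubm):
    # Higher score -> test utterance more likely from the claimed speaker.
    return spk_model.score(test_feats) - ubm.score(test_feats)
```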
Abstract:
In the last decade we have seen how small, lightweight aerial platforms - also known as Mini Unmanned Aerial Vehicles (MUAV) - equipped with heterogeneous sensors have become a 'most wanted' Remote Sensing (RS) tool. Most off-the-shelf aerial systems found on the market provide way-point navigation; however, they do not rely on a tool that computes the aerial trajectories considering all the aspects that allow the aerial missions to be optimized. One of the most demanded RS applications of MUAVs is image surveying. The acquired images are typically used to build a high-resolution image, i.e., a mosaic of the workspace surface, although the approach may be applied to any other application where a sensor-based map must be computed. This thesis provides a study of this application and a set of solutions and methods to address this kind of aerial mission using a fleet of MUAVs. In particular, a set of algorithms is proposed for map-based sampling and aerial coverage path planning (ACPP). Regarding map-based sampling, the proposed approaches consider workspaces with different shapes and surface characteristics. The workspace is sampled considering the sensor characteristics and a set of mission requirements. The algorithm applies different computational geometry approaches, providing a unified way to deal with workspaces of different shape and surface characteristics so that they can be surveyed by one or more MUAVs. This feature introduces an optimization step prior to path planning. After that, the ACPP problem is formalized and a set of ACPP algorithms to compute the MUAV trajectories is proposed. The problem addressed here is that of covering a wide area using MUAVs with limited autonomy, so the mission must be accomplished in the shortest possible time. The aerial survey is usually subject to a set of workspace restrictions, such as the take-off and landing positions and a safety distance between elements of the fleet; moreover, no-fly zones have to be avoided. Three different algorithms have been studied to address this problem, based on graph searching and on heuristic and meta-heuristic approaches (e.g., memetic and evolutionary algorithms). Finally, an extended account of field experiments applying the previous methods, as well as the materials and methods adopted in the outdoor missions, is presented. The reported outcomes demonstrate that the findings of this thesis improve ACPP missions for mapping purposes in an efficient and safe manner.
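As a toy illustration of aerial coverage planning, the following sketch generates a plain boustrophedon (lawnmower) sweep over a rectangular workspace and splits the legs among a small fleet; the workspace shape, camera footprint and overlap values are assumptions, and this is not one of the graph-search or meta-heuristic ACPP planners developed in the thesis.

```python
# Minimal sketch (rectangular workspace, nadir camera, no wind; a plain
# boustrophedon decomposition rather than the thesis ACPP algorithms).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Camera:
    footprint_w: float   # across-track ground footprint of one image [m]

def lawnmower(width: float, height: float, cam: Camera,
              side_overlap: float = 0.3) -> List[Tuple[float, float]]:
    """Way-points of a back-and-forth sweep covering a width x height area."""
    spacing = cam.footprint_w * (1.0 - side_overlap)   # distance between passes
    xs, waypoints, x, going_up = [], [], spacing / 2.0, True
    while x < width:
        xs.append(x)
        x += spacing
    for x in xs:
        y0, y1 = (0.0, height) if going_up else (height, 0.0)
        waypoints += [(x, y0), (x, y1)]
        going_up = not going_up
    return waypoints

def split_among_uavs(waypoints, n_uav: int):
    """Naive split of the sweep legs among a fleet (2 way-points per leg)."""
    legs = [waypoints[i:i + 2] for i in range(0, len(waypoints), 2)]
    per = max(1, len(legs) // n_uav)
    return [sum(legs[i * per:(i + 1) * per] if i < n_uav - 1
                else legs[i * per:], []) for i in range(n_uav)]

if __name__ == "__main__":
    wps = lawnmower(200.0, 120.0, Camera(footprint_w=40.0))
    routes = split_among_uavs(wps, n_uav=2)
    print(len(wps), "way-points,", [len(r) for r in routes], "per MUAV")
```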
Abstract:
The method presented in this paper addresses the problem of voltage sag state estimation (VSSE). The problem consists of estimating the frequency of voltage sags at non-monitored buses from the number of sags measured at monitored sites. Usually, due to the limited number of available voltage sag monitors, this is an underdetermined problem. In this approach, the mathematical formulation is based on the fault-positions concept and is solved by means of the Singular Value Decomposition (SVD) technique. The proposed estimation method has been validated using the IEEE 118-bus test system, and the results obtained have been very satisfactory.
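The following sketch illustrates, on made-up numbers rather than the paper's IEEE 118-bus data, how an underdetermined sag-estimation system of the form m = A x can be solved with the SVD-based minimum-norm (pseudoinverse) solution.

```python
# Minimal sketch (illustrative numbers only, not the paper's data): an
# underdetermined system m = A x relating measured sag counts at a few monitored
# buses (m) to sag frequencies at all buses (x), solved by SVD.
import numpy as np

rng = np.random.default_rng(0)
n_buses, n_monitors = 10, 4           # more unknowns than measurements
A = rng.random((n_monitors, n_buses)) # hypothetical exposure/observability matrix
x_true = rng.uniform(0, 5, n_buses)   # "true" sag frequencies (unknown in practice)
m = A @ x_true                        # sag counts seen by the monitors

# Truncated SVD pseudoinverse: minimum-norm solution of the underdetermined system.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(float).eps * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)
x_est = Vt.T @ (s_inv * (U.T @ m))    # equivalent to np.linalg.pinv(A) @ m

print("residual ||A x_est - m|| =", np.linalg.norm(A @ x_est - m))
```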
Abstract:
A new, simple, and quick-calculation methodology to obtain a solar panel model from the manufacturer's datasheet, in order to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance, when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, over the course of a day, under realistic ambient conditions.
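As a hedged illustration of the kind of datasheet-based model and MPPT simulation described, the sketch below uses a simplified ideal single-diode panel model (series and shunt resistances neglected, placeholder datasheet values) together with a basic perturb-and-observe MPPT loop; it is not the methodology proposed in the paper.

```python
# Minimal sketch (simplified ideal single-diode model; datasheet numbers below are
# placeholders, not from the paper).
import numpy as np

Q, K = 1.602e-19, 1.381e-23        # electron charge [C], Boltzmann constant [J/K]

def panel_current(v, g, t_cell, isc_stc=8.21, voc_stc=32.9,
                  n_cells=54, ideality=1.3, ki=0.0032):
    """Panel current [A] at voltage v, irradiance g [W/m^2], cell temp t_cell [degC]."""
    t = t_cell + 273.15
    vt = ideality * n_cells * K * t / Q                  # thermal voltage of the string
    iph = (isc_stc + ki * (t_cell - 25.0)) * g / 1000.0  # photocurrent vs irradiance/temp
    i0 = iph / (np.exp(voc_stc / vt) - 1.0)              # saturation current from Voc
    return iph - i0 * (np.exp(v / vt) - 1.0)

def perturb_and_observe(v, p_prev, v_prev, step=0.2):
    """One P&O MPPT iteration: move the operating voltage towards higher power."""
    g, t = 800.0, 40.0                                   # assumed ambient conditions
    p = v * max(panel_current(v, g, t), 0.0)
    direction = np.sign((p - p_prev) * (v - v_prev)) or 1.0
    return v + direction * step, p

v, p_prev, v_prev = 20.0, 0.0, 19.8
for _ in range(100):
    v_new, p = perturb_and_observe(v, p_prev, v_prev)
    v_prev, p_prev, v = v, p, v_new
print(f"operating point after 100 P&O steps: V = {v:.2f} V, P = {p:.1f} W")
```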
Abstract:
State convergence is a control strategy that was proposed in the early 2000s to ensure stability and transparency in a teleoperation system for specific values of the control gains. This control strategy has been implemented for linear systems with and without time delay. This paper represents the first attempt at demonstrating, theoretically and experimentally, that this control strategy can also be applied to a nonlinear teleoperation system with n degrees of freedom and delay in the communication channel. It is assumed that the human operator applies a constant force on the local manipulator during the teleoperation. In addition, the interaction between the remote manipulator and the environment is considered passive. Communication between the local and remote sites takes place through a communication channel with variable time delay. In this article, Lyapunov-Krasovskii theory is used to demonstrate that the local-remote teleoperation system is asymptotically stable.
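As a purely illustrative sketch of the teleoperation setup (constant operator force, passive spring-damper environment, delayed channel), the following toy simulation couples a 1-DOF linear local/remote pair with a PD law over a fixed delay; it does not implement the paper's nonlinear n-DOF state-convergence controller or its Lyapunov-Krasovskii analysis, and the gains and delay are assumptions.

```python
# Minimal sketch (toy 1-DOF linear local/remote pair with PD coupling over a fixed
# delay, constant operator force and a passive spring-damper environment).
import numpy as np

dt, T, delay = 1e-3, 10.0, 0.1                 # time step, horizon, channel delay [s]
n, d = int(T / dt), int(delay / dt)
m_l = m_r = 1.0                                 # local / remote inertias [kg]
kp, kd = 40.0, 12.0                             # PD coupling gains
ke, be = 20.0, 2.0                              # environment stiffness / damping
f_h = 5.0                                       # constant human operator force [N]

x_l = np.zeros(n); v_l = np.zeros(n)            # local position / velocity
x_r = np.zeros(n); v_r = np.zeros(n)            # remote position / velocity

for k in range(n - 1):
    kd_idx = max(k - d, 0)                      # delayed samples from the other site
    # Local manipulator: operator force + feedback of delayed remote state
    f_l = f_h + kp * (x_r[kd_idx] - x_l[k]) + kd * (v_r[kd_idx] - v_l[k])
    # Remote manipulator: tracks delayed local state, pushes against the environment
    f_e = ke * x_r[k] + be * v_r[k]             # passive environment reaction
    f_r = kp * (x_l[kd_idx] - x_r[k]) + kd * (v_l[kd_idx] - v_r[k]) - f_e
    v_l[k + 1] = v_l[k] + dt * f_l / m_l
    x_l[k + 1] = x_l[k] + dt * v_l[k + 1]
    v_r[k + 1] = v_r[k] + dt * f_r / m_r
    x_r[k + 1] = x_r[k] + dt * v_r[k + 1]

print(f"final positions: local {x_l[-1]:.3f} m, remote {x_r[-1]:.3f} m, "
      f"position offset {abs(x_l[-1] - x_r[-1]):.3f} m")
```

With a stiff environment and a constant operator force, a steady position offset between local and remote remains; this is the expected physical behaviour of the toy model rather than an instability.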
Abstract:
Critical analysis of the application of futures studies to the field of urban planning
Abstract:
An important issue related to future nuclear fusion reactors fueled with deuterium and tritium is the creation of large amounts of dust due to several mechanisms (disruptions, ELMs and VDEs). The dust size expected in nuclear fusion experiments (such as ITER) is on the order of microns (between 0.1 and 1000 μm). Almost the total amount of this dust remains in the vacuum vessel (VV). This radiological dust can be re-suspended in the case of a LOVA (loss of vacuum accident), and this phenomenon can cause explosions and serious damage to the health of the operators and to the integrity of the device. The authors have developed a facility, STARDUST, in order to reproduce thermo-fluid-dynamic conditions comparable to those expected inside the VV of the next generation of experiments, such as ITER, in case of a LOVA. The dust used inside the STARDUST facility has particle sizes and physical characteristics comparable with those of the dust created inside the VV of nuclear fusion experiments. In this facility an experimental campaign has been conducted with the purpose of tracking the dust re-suspended at low pressurization rates (comparable to those expected in case of LOVA in ITER and suggested by the ITER General Safety and Security Report, ITER-GSSR) using a fast camera with a frame rate from 1000 to 10,000 images per second. The velocity fields of the mobilized dust are derived from the imaging of a two-dimensional slice of the flow illuminated by an optically adapted laser beam. The aim of this work is to demonstrate the possibility of dust tracking by means of image processing, with the objective of determining the velocity field of the dust re-suspended during a LOVA.
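As an illustration of deriving a velocity field from consecutive high-speed frames, the sketch below applies OpenCV dense optical flow to two frames of the illuminated slice; the frame rate, pixel calibration and file names are placeholders, and this is not the authors' image-processing chain.

```python
# Minimal sketch (assumed OpenCV dense optical flow on two consecutive high-speed
# frames; frame rate and pixel scale are placeholders).
import cv2
import numpy as np

FRAME_RATE = 5000.0        # frames per second (assumed, within the 1000-10000 range)
MM_PER_PIXEL = 0.05        # spatial calibration of the laser-illuminated slice (assumed)

def dust_velocity_field(frame_a_path: str, frame_b_path: str) -> np.ndarray:
    """Return per-pixel velocity magnitude [mm/s] between two consecutive frames."""
    a = cv2.imread(frame_a_path, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(frame_b_path, cv2.IMREAD_GRAYSCALE)
    # Farneback dense optical flow: displacement of each pixel in pixels/frame
    flow = cv2.calcOpticalFlowFarneback(a, b, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    dx, dy = flow[..., 0], flow[..., 1]
    speed_px_per_frame = np.hypot(dx, dy)
    return speed_px_per_frame * MM_PER_PIXEL * FRAME_RATE   # mm/s

if __name__ == "__main__":
    v = dust_velocity_field("frame_0001.png", "frame_0002.png")  # placeholder names
    print("median dust speed:", np.median(v[v > 0]), "mm/s")
```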
Abstract:
In the European context of upgrading the energy performance of the housing stock, multiple barriers hinder the wide uptake of sustainable retrofitting practices. Moreover, some of these practices may imply negative effects that are often disregarded. Policy makers need to identify how to increase and improve retrofitting practices from the comprehensive point of view of sustainability. None of the existing assessment tools addresses all the issues relevant to sustainable development in a local situation from a life cycle perspective. Life cycle sustainability assessment methodology, or LCSA, analyzes environmental and socioeconomic impacts. The environmental part is quite developed, but the socioeconomic aspect is still challenging. This work proposes socioeconomic criteria to be included in an LCSA to assess retrofitting works in the specific context of the Brussels-Capital Region. The feasibility of LCSA and its challenging methodological aspects are discussed.
Abstract:
Dendroecological studies for the analysis of torrential regimes and floods
Abstract:
We consider the situation where there are several alternatives for investing a quantity of money to achieve a set of objectives. The choice of which alternative to apply depends on how citizens and political representatives perceive that such objectives should be achieved. All citizens with the right to vote can express their preferences in the decision-making process, and these preferences may be incomplete. Political representatives represent the citizens who have not taken part in the decision-making process, and the weight corresponding to the political representatives depends on the number of citizens who have intervened in the process. The methodology we propose requires the participants to specify, for each alternative, how they rate the different attributes and the relative importance of the attributes. On the basis of this information an expected utility interval is output for each alternative. To do this, an evidential reasoning approach is applied. This approach improves the insightfulness and rationality of the decision-making process by using a belief decision matrix for problem modeling and the Dempster-Shafer theory of evidence for attribute aggregation. Finally, we propose using the distances of each expected utility interval from the maximum and the minimum utilities to rank the alternative set. The basic idea is that an alternative is ranked first if its distance to the maximum utility is the smallest and its distance to the minimum utility is the greatest. If only one of these conditions is satisfied, a distance ratio is then used.
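The interval-ranking step can be illustrated with the following sketch on hypothetical expected-utility intervals; the exact form of the distance ratio used in the paper may differ, and the evidential-reasoning aggregation that produces the intervals is not shown.

```python
# Minimal sketch (hypothetical utility intervals; illustrates only the final
# interval-ranking step, not the Dempster-Shafer attribute aggregation).
from typing import Dict, Tuple

def rank_alternatives(intervals: Dict[str, Tuple[float, float]],
                      u_min: float = 0.0, u_max: float = 1.0):
    """Rank alternatives by the distances of their expected-utility interval
    [u_lo, u_hi] to the maximum and minimum utilities; closer to u_max and
    farther from u_min is better, and a distance ratio resolves conflicts."""
    def key(item):
        _, (lo, hi) = item
        d_max = u_max - hi            # distance of the interval to the maximum utility
        d_min = lo - u_min            # distance of the interval to the minimum utility
        ratio = d_min / d_max if d_max > 0 else float("inf")
        # Large ratio = near the maximum AND far from the minimum; it also decides
        # the case where only one of the two conditions holds.
        return -ratio
    return [name for name, _ in sorted(intervals.items(), key=key)]

if __name__ == "__main__":
    alternatives = {           # hypothetical expected-utility intervals per alternative
        "A1": (0.55, 0.80),
        "A2": (0.60, 0.75),
        "A3": (0.40, 0.90),
    }
    print(rank_alternatives(alternatives))
```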
Abstract:
Clinicians could model the brain injury of a patient through his or her brain activity. However, how this model is defined and how it changes while the patient recovers are questions that remain unanswered. In this paper, the use of the MedVir framework is proposed with the aim of answering these questions. Based on complex data mining techniques, it provides not only the differentiation between TBI patients and control subjects (with 72% accuracy using 0.632 bootstrap validation), but also the ability to detect whether a patient may recover or not, all of it in a quick and easy way through a visualization technique that allows interaction.
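As an illustration of the 0.632 bootstrap validation mentioned above, the sketch below estimates classification accuracy on synthetic data with an off-the-shelf classifier (assumed scikit-learn); it is not the MedVir pipeline or its data.

```python
# Minimal sketch (synthetic data and a generic classifier; shows the 0.632 bootstrap
# accuracy estimate, not the MedVir framework itself).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

def bootstrap_632_accuracy(X, y, n_rounds=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    clf = LogisticRegression(max_iter=1000)
    resub = clf.fit(X, y).score(X, y)              # resubstitution (training) accuracy
    oob_scores = []
    for _ in range(n_rounds):
        idx = rng.integers(0, n, n)                # bootstrap sample (with replacement)
        oob = np.setdiff1d(np.arange(n), idx)      # out-of-bag samples
        if oob.size == 0:
            continue
        clf.fit(X[idx], y[idx])
        oob_scores.append(clf.score(X[oob], y[oob]))
    acc_oob = float(np.mean(oob_scores))
    # 0.632 estimator: weighted mix of optimistic resubstitution and pessimistic OOB accuracy
    return 0.368 * resub + 0.632 * acc_oob

if __name__ == "__main__":
    X, y = make_classification(n_samples=120, n_features=20, random_state=0)
    print(f".632 bootstrap accuracy: {bootstrap_632_accuracy(X, y):.3f}")
```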
Abstract:
One of the main concerns when conducting a dam test is the accurate determination of the hydrograph for a specific flood event. The use of 2D direct-rainfall hydraulic mathematical models on a finite element mesh, combined with the efficiency of the vector calculus provided by CUDA (Compute Unified Device Architecture) technology, nowadays enables the simulation of complex hydrological models without the need to split the terrain into sub-basins and transit reaches (as in HEC-HMS). Both the Spanish PNOA (National Plan of Aerial Orthophotography) Digital Terrain Model GRID, with 5 x 5 m accuracy, and the CORINE (Coordination of Information on the Environment) GIS Land Cover, which allows assessment of ground roughness, provide enough data to easily build these kinds of models.
Abstract:
The carbonation of concrete, or the ingress of chlorides in quantities sufficient to reach the level of the bars, triggers reinforcement corrosion. One of the most significant effects of reinforcing steel corrosion on reinforced concrete structures is the decline in the ductility-related properties of the steel. Reinforcement ductility has a decisive effect on the overall ductility of reinforced concrete structures. Different codes classify the type of steel according to its ductility, defined by minimum values of several parameters. Using ductility indicators that associate different properties can be advantageous on many occasions, so it is considered necessary to define ductility by means of a single parameter that considers strength and deformation values simultaneously. There are a number of criteria for defining steel ductility by a single parameter. The present experimental study addresses the variation in the ductility of concrete-embedded steel bars exposed to accelerated corrosion. This paper analyzes the suitability of a new ductility indicator applied to corroded bars.
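As a generic illustration of a single-parameter ductility measure combining strength and deformation, the sketch below computes, from a synthetic stress-strain record, the classical f_u/f_y and elongation values plus an energy-based index; this index is only an example and is not the new indicator proposed in the paper.

```python
# Minimal sketch (synthetic stress-strain points; the combined index below is a
# generic energy-based illustration of a "single parameter" joining strength and
# deformation, NOT the new indicator proposed in the paper).
import numpy as np

# Hypothetical tensile test record for a reinforcing bar: strain [-], stress [MPa]
strain = np.array([0.000, 0.002, 0.010, 0.030, 0.060, 0.090])
stress = np.array([0.0,   500.0, 510.0, 560.0, 595.0, 600.0])

f_y, eps_y = 500.0, 0.002            # yield stress and strain (assumed from the record)
f_u, eps_u = stress[-1], strain[-1]  # ultimate stress and strain at maximum load

# Classical multi-parameter characterisation required by design codes
strength_ratio = f_u / f_y           # e.g. codes ask for a minimum f_u/f_y
elongation = eps_u                   # e.g. codes ask for a minimum strain at max load

# Illustrative single-parameter index: plastic strain energy density beyond yield
# (trapezoidal integration), normalised by f_y so that strength and deformation
# both enter one dimensionless number.
mask = strain >= eps_y
s, e = stress[mask], strain[mask]
w_plastic = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(e))   # [MPa] = [MJ/m^3]
ductility_index = w_plastic / f_y

print(f"f_u/f_y = {strength_ratio:.3f}, eps_u = {elongation:.3f}, "
      f"energy-based index = {ductility_index:.4f}")
```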