899 results for the SIMPLE algorithm
Abstract:
The aim of this doctoral Thesis is the development of a methodology for the automatic detection of anomalies from hyperspectral data, or imaging spectrometry, and their mapping under different typological conditions of surface and terrain. Hyperspectral technology, or imaging spectrometry, offers the potential to accurately characterize the state of the materials that make up the various surfaces on the basis of their spectral response. This state is usually variable, whereas observations are made a limited number of times and under specific illumination conditions. As the number of spectral bands increases, so does the number of samples needed to define the classes spectrally, in what is known as the Curse of Dimensionality or Hughes Effect (Bellman, 1957); such samples are usually unavailable and costly to obtain, as becomes evident when one considers what this implies in Planetary Exploration. Under the definition of an anomaly, in its spectral sense, as the response of an image pixel that differs significantly from its surroundings, the central problem addressed in the Thesis lies, first, in how to reduce the dimensionality of the information in the hyperspectral data, discriminating that which is most significant for the detection of anomalous responses, and second, in establishing the relationship between detected spectral anomalies and what we have called informational anomalies, that is, anomalies that provide some kind of real information about the surfaces or materials that produce them. In the detection of anomalous responses, no prior knowledge of the targets is assumed, so that pixels are automatically separated according to their spectral information when it differs significantly from a background that is estimated either globally for the whole scene or locally by image segmentation. The methodology developed has focused on the implications of the statistical definition of the spectral background, proposing a new approach that allows anomalies to be discriminated against backgrounds segmented into different groups of wavelengths of the spectrum, exploiting the potential for separation between the reflective and emissive parts of the electromagnetic spectrum. The efficiency of the main anomaly detection algorithms has been studied, contrasting the results of the RX algorithm (Reed and Xiaoli, 1990), adopted as a standard by the scientific community, with the UTD method (Uniform Targets Detector), its RXD-UTD variant, subspace-based methods such as SSRX (Subspace RX), and methods based on image subspace projections, such as OSPRX (Orthogonal Subspace Projection RX) and PP (Projection Pursuit). A new method has been developed, evaluated and contrasted against the previous ones, which is a variation of PP and describes the spectral background through discriminant analysis of bands of the electromagnetic spectrum, separating the anomalies with the algorithm called Detector de Anomalías de Fondo Térmico (DAFT), or Thermal Background Anomaly Detector, applicable to sensors that record data in the emissive spectrum. The different anomaly detection methods have been evaluated in the visible and near infrared (VNIR), shortwave infrared (SWIR), mid infrared (MIR) and thermal infrared (TIR) ranges of the electromagnetic spectrum.
The response of surfaces at the different wavelengths of the electromagnetic spectrum, together with their surroundings, influences the type and frequency of the spectral anomalies they may produce. For this reason, the research has used hyperspectral data cubes from airborne sensors whose strategies and designs for constructing the spectrometric image differ. Test data sets from the AHS (Airborne Hyperspectral System), HyMAP Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) sensors have been evaluated. Experiments have been designed over natural, urban and semi-urban settings of different complexity. The behaviour of the different anomaly detectors has been evaluated through 23 tests corresponding to 15 study areas grouped into 6 spaces or scenarios: Urban (E1), Semi-urban/Industrial/Urban Periphery (E2), Forest (E3), Agricultural (E4), Geological/Volcanic (E5) and Other Spaces: Water, Clouds and Shadows (E6). The sensors evaluated are characterized by recording images in a wide range of narrow, contiguous bands of the electromagnetic spectrum. The Thesis has focused on the development of techniques that automatically separate and extract pixels, or groups of pixels, whose spectral signature differs in a discriminant way from those around them, adopting as the sample space part or all of the spectral bands in which the hyperspectral sensor has recorded radiance. A factor taken into account in the research has been the measuring instrument itself, that is, the characterization of the different subsystems, imaging and auxiliary sensors, involved in the process. In order to use the measured data quantitatively, it has been necessary to define the spatial and spectral relationships of the sensor with the observed surface and with the potential anomalies and target detection patterns. The effect of the type of sensor on anomaly detection has been analysed, both in its spectral configuration and in the design strategies used to record the radiation coming from the surfaces, the two main types of sensors studied being whiskbroom (rotating mirror) scanners and pushbroom scanners. Different scenarios have been defined in the research, covering a wide variability of geomorphological environments and cover types, in Mediterranean, mid-latitude and tropical environments. In summary, this Thesis presents an anomaly detection technique for hyperspectral data called DAFT, a variant of PP, based on dimensionality reduction by projecting the background onto a range of thermal spectrum wavelengths distinct from the projection of the anomalies or targets without a known spectral signature. The proposed methodology has been tested with real hyperspectral images from different sensors and in different scenarios or spaces, and therefore also with different spectral backgrounds, where the results show the benefits of the approach in detecting a wide variety of objects whose spectral signatures deviate sufficiently from the background.
The technique turns out to be automatic in the sense that there is no need for parameter tuning, giving significant results in all cases. Even subpixel-sized objects, which cannot be distinguished by the human eye in the original image, can be detected as anomalies. In addition, a comparison is made between the proposed approach, the popular RX technique and other detectors, both in their global and local modes. The proposed method outperforms the others in certain scenarios, demonstrating its ability to reduce the proportion of false alarms. The results of the automatic DAFT algorithm developed have demonstrated an improvement in the qualitative definition of the spectral anomalies that identify distinct entities on or below the surface, replacing the classical normal distribution model with a robust method that considers different alternatives from the very moment of hyperspectral data acquisition. To achieve this, it has been necessary to analyse the relationship between biophysical parameters, such as the reflectance and emissivity of the materials, and the spatial distribution of the detected entities with respect to their surroundings. Finally, the DAFT algorithm has been chosen as the most suitable for sensors that acquire data in the TIR, since it shows the best agreement with the reference data, demonstrating great computational efficiency that facilitates its implementation in a mapping system that automatically projects the detected anomalies onto a geographic reference frame, which confirms a significant step towards what is called real-time mapping. The aim of this Thesis is to develop a specific methodology to be applied in automatic anomaly detection processes using hyperspectral data, also called hyperspectral scenes, and to improve the classification processes. Several scenarios, areas and their relationship with surfaces and objects have been tested. The spectral characteristics of the reflectance and emissivity parameters in the pattern recognition of urban materials in several hyperspectral scenes have also been tested. Spectral ranges of the visible-near infrared (VNIR), shortwave infrared (SWIR) and thermal infrared (TIR) from hyperspectral data cubes of AHS (Airborne Hyperspectral System), HyMAP Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) have been used in this research. It is assumed that there is no prior knowledge of the targets in anomaly detection. Thus, the pixels are automatically separated according to their spectral information, significantly differentiated with respect to a background, either globally for the full scene or locally by image segmentation. Several experiments on different scenarios have been designed, analyzing the behavior of the standard RX anomaly detector and of different subspace-based, image-projection-based and segmentation-based anomaly detection methods. Results and their consequences in unsupervised classification processes are discussed. Detection of spectral anomalies aims at automatically extracting pixels that show significant responses in relation to their surroundings. This Thesis deals with the unsupervised technique of target detection, also called anomaly detection.
Since this technique assumes no prior knowledge about the target or the statistical characteristics of the data, the only available option is to look for objects that are differentiated from the background. Several methods have been developed in the last decades, allowing a better understanding of the relationships between the image dimensionality and the optimization of search procedures, as well as the subpixel differentiation of the spectral mixture and its implications for anomalous responses. In another sense, imaging spectrometry has proven to be efficient in the characterization of materials, based on statistical methods that use specific reflection and absorption bands. Spectral configurations in the VNIR, SWIR and TIR have been successfully used for mapping materials in different urban scenarios. There has been an increasing interest in the use of high resolution data (both spatial and spectral) to detect small objects and to discriminate surfaces in areas with urban complexity. This has come to be known as target detection, which can be either supervised or unsupervised. In supervised target detection, algorithms lean on prior knowledge, such as the spectral signature. The detection process of matching signatures is not straightforward due to the complications of relating airborne sensor data to material spectra on the ground. This can be further complicated by the large number of possible objects of interest, as well as uncertainty as to the reflectance or emissivity of these objects and surfaces. An important objective in this research is to establish relationships that allow linking spectral anomalies with what can be called informational anomalies and, therefore, to identify information related to anomalous responses in some places rather than simply spotting differences from the background. The development in recent years of new hyperspectral sensors and techniques widens the possibilities for applications in remote sensing of the Earth. Remote sensing systems measure and record electromagnetic disturbances that the surveyed objects induce in their surroundings, by means of different sensors mounted on airborne or space platforms. Map updating is important for managers and decision makers because of the fast changes that usually happen in natural, urban and semi-urban areas. It is necessary to optimize the methodology in order to obtain the best from remote sensing techniques applied to hyperspectral data. The first problem with hyperspectral data is to reduce the dimensionality while keeping the maximum amount of information. Hyperspectral sensors considerably increase the amount of information; this allows better precision in the separation of materials, but at the same time a larger number of parameters must be estimated, and the precision falls as the number of bands increases. This is known as the Hughes effect (Bellman, 1957). Hyperspectral imagery allows us to discriminate between a huge number of different materials; however, some land and urban covers are made up of similar materials and respond similarly, which produces confusion in the classification. The training and the algorithm used for mapping are also important for the final result, and some properties of the thermal spectrum for detecting land cover are studied.
In summary, this Thesis presents a new technique for anomaly detection in hyperspectral data, called DAFT, a variant of PP, based on dimensionality reduction by projecting the background onto a range of thermal spectrum wavelengths distinct from the projection of the anomalies or targets with unknown spectral signature. The proposed methodology has been tested with hyperspectral images from different imaging spectrometers corresponding to several places or scenarios, and therefore with different spectral backgrounds. The results show the benefits of the approach for the detection of a variety of targets whose spectral signatures have sufficient deviation with respect to the background. DAFT is an automated technique in the sense that it is not necessary to adjust parameters, and it provides significant results in all cases. Subpixel anomalies, which cannot be distinguished by the human eye in the original image, can nevertheless be detected as outliers due to the projection of the VNIR endmembers with a very strong thermal contrast. Furthermore, a comparison between the proposed approach and the well-known RX detector is performed in both modes, global and local. The proposed method outperforms the existing ones in particular scenarios, demonstrating its ability to reduce the probability of false alarms. The results of the automatic DAFT algorithm have demonstrated an improvement in the qualitative definition of the spectral anomalies, replacing the classical normal distribution model with a robust method. To achieve this, it has been necessary to analyze the relationship between biophysical parameters, such as reflectance and emissivity, and the spatial distribution of the detected entities with respect to their environment, for example buried or semi-buried materials, or building covers of asbestos, cellular polycarbonate-PVC or metal composites. Finally, the DAFT method has been chosen as the most suitable for anomaly detection with imaging spectrometers that acquire data in the thermal infrared spectrum, since it presents the best results in comparison with the reference data, demonstrating great computational efficiency that facilitates its implementation in a mapping system, a step towards what is called Real-Time Mapping.
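For reference, the global RX detector cited in this abstract as the community standard (Reed and Xiaoli, 1990) reduces to a Mahalanobis distance between each pixel spectrum and the scene-wide background statistics. The following is a minimal sketch of that baseline only, not of the DAFT method itself; the synthetic scene and the function names are illustrative assumptions.

```python
# Minimal sketch of the global RX anomaly detector (Reed and Xiaoli, 1990).
import numpy as np

def rx_global(cube):
    """Return an RX anomaly score per pixel for a hyperspectral cube.

    cube: array of shape (rows, cols, bands) in radiance or reflectance.
    The score is the Mahalanobis distance of each pixel spectrum to the
    global background mean, using the global background covariance.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)
    cov_inv = np.linalg.pinv(cov)        # pseudo-inverse guards against a singular covariance
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.normal(0.2, 0.02, size=(64, 64, 50))   # synthetic background
    scene[32, 32] += 0.3                                # implanted spectral anomaly
    rx = rx_global(scene)
    print("anomaly score at (32, 32):", rx[32, 32], "| background median:", np.median(rx))
```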
Abstract:
We aim at understanding the multislip behaviour of metals subject to irreversible deformations at small scales. By focusing on the simple shear of a constrained single-crystal strip, we show that discrete Dislocation Dynamics (DD) simulations predict a strong latent hardening size effect, with smaller being stronger in the range [1.5 µm, 6 µm] for the strip height. We attempt to represent the DD pseudo-experimental results by developing a flow theory of Strain Gradient Crystal Plasticity (SGCP), involving both energetic and dissipative higher-order terms and, as a main novelty, a strain gradient extension of the conventional latent hardening. In order to discuss the capability of the proposed SGCP theory, we implement it into a Finite Element (FE) code and set its material parameters on the basis of the DD results. The SGCP FE code is specifically developed for the boundary value problem under study so that we can implement a fully implicit (Backward Euler) consistent algorithm. Special emphasis is placed on the discussion of the role of the material length scales involved in the SGCP model, from both the mechanical and numerical points of view.
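The abstract mentions a fully implicit (Backward Euler) consistent algorithm in the SGCP finite element code. The sketch below illustrates only the generic structure of such an update, a residual solved by Newton iteration with its algorithmic tangent, applied to a placeholder scalar evolution law; it is not the SGCP constitutive model.

```python
# Schematic Backward Euler step solved with Newton iteration.
# The evolution law dx/dt = rate(x) used here is a stand-in, not the SGCP model.
import numpy as np

def backward_euler_step(x_n, dt, rate, drate_dx, tol=1e-10, max_iter=25):
    """Solve the implicit residual r(x) = x - x_n - dt * rate(x) = 0 for x."""
    x = x_n                                   # initial guess: previous converged value
    for _ in range(max_iter):
        r = x - x_n - dt * rate(x)
        if abs(r) < tol:
            return x
        dr = 1.0 - dt * drate_dx(x)           # consistent (algorithmic) tangent of the residual
        x -= r / dr                           # Newton update
    raise RuntimeError("Newton iteration did not converge")

if __name__ == "__main__":
    rate = lambda x: -2.0 * x                 # placeholder law dx/dt = -2x
    drate = lambda x: -2.0
    x, dt = 1.0, 0.1
    for _ in range(10):
        x = backward_euler_step(x, dt, rate, drate)
    print("x after 10 implicit steps:", x, "| exact value at t=1:", np.exp(-2.0))
```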
Abstract:
After the extensive research on the capabilities of the Boundary Integral Equation Method produced during the past years, the versatility of its applications has been well established. Perhaps the years to come will see the in-depth analysis of several contentious points, for example adaptive integration, the solution of the system of equations, etc. This line is clear in academic research. In this paper we comment on the effect of the manner of imposing the boundary conditions in 3-D coupled problems. Here the effects are particularly magnified: in the first place by the simple model used (constant elements) and secondly by the process of solution, i.e. first a potential problem is solved and then the results are used as data for an elasticity problem. The errors of the two processes add up, and small disturbances, unimportant in the separate problems, can produce serious errors in the final results. The specific problem we have chosen is especially interesting. Although more general cases (i.e. transient) can be treated, here the domain integrals can be converted into boundary ones and the influence of the manner in which boundary conditions are applied reflects the full importance of the problem.
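As a purely schematic illustration of the point about accumulated errors in the chained solution (a potential problem is solved first and its result then feeds the elasticity problem), the sketch below chains two linear solves built from arbitrary stand-in matrices, not actual BEM influence matrices, and measures how a small disturbance in the boundary data propagates to the final result.

```python
# Error amplification through two chained linear solves (schematic only).
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.normal(size=(n, n)) + 5.0 * np.eye(n)       # stand-in for the potential problem
B = rng.normal(size=(n, n)) + 0.5 * np.eye(n)       # stand-in for the elasticity problem

g = rng.normal(size=n)                               # boundary data of the first problem
u1 = np.linalg.solve(A, g)                           # step 1: potential solution
u2 = np.linalg.solve(B, u1)                          # step 2: elasticity driven by step-1 result

delta = 1e-6 * rng.normal(size=n)                    # small disturbance in the boundary data
u2_pert = np.linalg.solve(B, np.linalg.solve(A, g + delta))

amplification = np.linalg.norm(u2_pert - u2) / np.linalg.norm(delta)
print("product of condition numbers:", np.linalg.cond(A) * np.linalg.cond(B))
print("observed error amplification:", amplification)
```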
Abstract:
The complexity of planning a wireless sensor network depends on the aspects being optimized and on the application requirements. Even though Murphy's Law applies everywhere in reality, a good planning algorithm will help designers become aware of the weak points of their design and improve them before the problems are exposed in the real deployment. A 3D multi-objective planning algorithm is proposed in this paper to provide solutions on the locations of nodes and their properties. It employs a purpose-built ray-tracing scheme for modelling the sensing signal and radio propagation. It is therefore sensitive to obstacles and makes the models of sensing coverage and link quality more practical than other heuristics that use ideal unit-disk models. The proposed algorithm aims at reaching an overall optimization of hardware cost, coverage, link quality and lifetime. Thus each of these metrics is modelled and normalized to compose a desirability function. An evolutionary algorithm is designed to efficiently tackle this NP-hard multi-objective optimization problem. The proposed algorithm is applicable to both indoor and outdoor 3D scenarios. Different parameters that affect the performance are analyzed through extensive experiments; two state-of-the-art algorithms are rebuilt and tested with the same configuration as the proposed algorithm. The results indicate that the proposed algorithm converges efficiently within 600 iterations and performs better than the compared heuristics.
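A minimal sketch of the metric-aggregation step described above, in which hardware cost, coverage, link quality and lifetime are normalized and composed into a single desirability value. The normalization ranges, the weights and the geometric-mean aggregation are illustrative assumptions, not the exact formulation of the paper.

```python
# Composing normalized deployment metrics into one desirability score (sketch).
import numpy as np

def normalize(value, worst, best):
    """Map a raw metric onto [0, 1], where 1 is the most desirable."""
    return np.clip((value - worst) / (best - worst), 0.0, 1.0)

def desirability(cost, coverage, link_quality, lifetime, weights=(1.0, 1.0, 1.0, 1.0)):
    d = np.array([
        normalize(cost, worst=5000.0, best=500.0),       # cheaper is better
        normalize(coverage, worst=0.5, best=1.0),        # fraction of the area sensed
        normalize(link_quality, worst=0.6, best=1.0),    # e.g. packet reception ratio
        normalize(lifetime, worst=30.0, best=365.0),     # expected lifetime in days
    ])
    w = np.asarray(weights, dtype=float)
    # Weighted geometric mean: a single poor metric drags the whole score down.
    return float(np.exp(np.sum(w * np.log(np.maximum(d, 1e-9))) / np.sum(w)))

if __name__ == "__main__":
    print(desirability(cost=1800.0, coverage=0.92, link_quality=0.85, lifetime=180.0))
```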
Abstract:
Finding the optimum mesh design in the finite element method under the restriction of a given number of degrees of freedom is an interesting problem, particularly in applications of the method. At present, the usual procedures introduce new degrees of freedom (remeshing) into a given mesh in order to obtain a more adequate one from the point of view of the calculation results (error uniformity). However, from the solution of the optimum mesh problem with a specific number of degrees of freedom, some useful recommendations and criteria for mesh construction may be drawn. For 1-D problems, namely for the simple truss and beam elements, analytical solutions have been found and they are given in this paper. For the more complex 2-D problems (plane stress and plane strain), numerical methods based on optimization procedures have to be used to obtain the optimum mesh. The objective function used in the minimization process has been the total potential energy. Some examples are presented. Finally, some conclusions and hints about possible new developments of these techniques are given.
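A minimal sketch of the idea of treating the node positions as design variables and minimizing the total potential energy for a fixed number of degrees of freedom. The 1-D bar, its linearly varying axial load and the parametrization of the node spacing are illustrative assumptions, not the paper's own examples.

```python
# Node positions as design variables, total potential energy as objective (sketch).
import numpy as np
from scipy.optimize import minimize

EA, L, N_EL = 1.0, 1.0, 5          # axial stiffness, bar length, number of elements
q = lambda x: 10.0 * x             # axial distributed load, heavier towards the free end

def potential_energy(nodes):
    """Total potential energy of the FE solution on the given node layout."""
    n = len(nodes)
    K = np.zeros((n, n)); f = np.zeros(n)
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        ke = EA / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        fe = q(0.5 * (nodes[e] + nodes[e + 1])) * h / 2.0 * np.array([1.0, 1.0])
        K[e:e + 2, e:e + 2] += ke
        f[e:e + 2] += fe
    u = np.zeros(n)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])        # clamp the node at x = 0
    return 0.5 * u @ K @ u - f @ u

def nodes_from(params):
    """Map unconstrained parameters to ordered node positions in [0, L]."""
    h = np.exp(params)
    return np.concatenate(([0.0], np.cumsum(h) / h.sum() * L))

res = minimize(lambda p: potential_energy(nodes_from(p)), np.zeros(N_EL))
print("uniform-mesh energy  :", potential_energy(np.linspace(0.0, L, N_EL + 1)))
print("optimized node layout:", np.round(nodes_from(res.x), 3))
print("optimized energy     :", potential_energy(nodes_from(res.x)))
```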
Abstract:
In the field of dimensional metrology, the use of optical measuring machines requires the handling of a large number of measurement points, or scanning points, taken from the image of the measurand. The presence of correlation between these measurement points has a significant influence on the uncertainty of the result. The aim of this work is the development of a procedure for estimating the uncertainty of measurement of a geometrically elliptical shape, taking into account the correlation between the scanning points. These points are obtained from an image produced using a commercial flatbed scanner. The characteristic parameters of the ellipse (coordinates of the center, semi-axes and the angle of the semi-major axis with regard to the horizontal) are determined using a least squares fit and orthogonal distance regression. The uncertainty is estimated using the information from the auto-correlation function of the residuals and is propagated through the fitting algorithm according to the rules described in Evaluation of Measurement Data—Supplement 2 to the ‘Guide to the Expression of Uncertainty in Measurement’—Extension to any number of output quantities. By introducing the concept of cut-off length, it is possible to take the presence of correlation into account in the uncertainty estimation in a very simple way while avoiding underestimation.
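A minimal sketch of the correlation-aware part of the procedure: estimating the auto-correlation function of the fit residuals, choosing a cut-off length, and deriving an effective number of independent scanning points. It is a simplified stand-in for the full GUM Supplement 2 propagation; the AR(1) residuals and the 0.2 threshold are assumptions.

```python
# Cut-off length and effective sample size from residual autocorrelation (sketch).
import numpy as np

def autocorrelation(residuals):
    r = residuals - residuals.mean()
    acf = np.correlate(r, r, mode="full")[r.size - 1:]
    return acf / acf[0]

def effective_sample_size(residuals, threshold=0.2):
    rho = autocorrelation(residuals)
    below = np.where(rho < threshold)[0]
    cutoff = below[0] if below.size else rho.size    # cut-off length in samples
    n_eff = residuals.size / (1.0 + 2.0 * rho[1:cutoff].sum())
    return cutoff, max(n_eff, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Correlated AR(1) noise stands in for the residuals of the ellipse fit.
    e = np.zeros(2000)
    for k in range(1, e.size):
        e[k] = 0.9 * e[k - 1] + rng.normal(scale=1e-3)
    cutoff, n_eff = effective_sample_size(e)
    print("cut-off length:", cutoff, "| effective points:", round(n_eff))
    print("naive standard uncertainty     :", e.std(ddof=1) / np.sqrt(e.size))
    print("correlation-aware uncertainty  :", e.std(ddof=1) / np.sqrt(n_eff))
```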
Abstract:
The present work describes a new methodology for the automatic detection of the glottal space from laryngeal images based on active contour models (snakes). In order to obtain an appropriate image for the use of snake-based techniques, the proposed algorithm combines a pre-processing stage that includes some traditional techniques (thresholding and a median filter) with more sophisticated ones such as anisotropic filtering. The thresholding value was fixed at 85% of the maximum peak of the image histogram, and the anisotropic filter makes it possible to distinguish two intensity levels, one corresponding to the background and the other to the foreground (glottis). The initialization is based on the magnitude obtained using the Gradient Vector Flow field, ensuring an automatic process for the selection of the initial contour. The performance of the algorithm is tested using the Pratt coefficient and compared against a manual segmentation. The results obtained suggest that this method provides results comparable with those of other techniques, such as the one proposed in (Osma-Ruiz et al., 2008).
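A minimal sketch of the pre-processing stage described above: a median filter followed by a threshold placed at 85% of the grey level at which the image histogram peaks (one possible reading of the 85% rule). The synthetic frame and the filter size are assumptions, and the anisotropic filtering and GVF snake stages are not reproduced here.

```python
# Median filter plus histogram-peak-based threshold for the glottal region (sketch).
import numpy as np
from scipy.ndimage import median_filter

def preprocess(image, filter_size=5):
    smoothed = median_filter(image, size=filter_size)
    hist, bin_edges = np.histogram(smoothed, bins=256)
    peak_level = bin_edges[np.argmax(hist)]          # grey level where the histogram peaks
    threshold = 0.85 * peak_level
    return smoothed < threshold                      # dark glottal region as foreground mask

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    frame = 200.0 + 10.0 * rng.normal(size=(128, 128))   # bright laryngeal tissue
    frame[50:80, 55:70] = 40.0                            # dark glottal opening
    mask = preprocess(frame)
    print("foreground pixels detected:", int(mask.sum()), "(true region: 450)")
```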
Abstract:
Modern mobile devices offer more and more functionality due to the rapid advance of mobile communication and computing technologies. However, battery capacity has not experienced an equivalent increase. As a result, the user experience of modern mobile systems is greatly affected by battery life, which is an unstable factor that is difficult to control. To address this problem, previous research has proposed an energy-centric power management (PM) scheme that provides a guarantee on the operational battery life by managing energy as a first-class resource in the system. Since the scheduler plays a fundamental role in managing energy consumption and in guaranteeing application performance, this thesis explores user experience optimization for energy-limited mobile systems from the perspective of a scheduler that takes energy consumption into account in a context where energy is a first-class resource. This thesis first analyses the factors that generally contribute to the user experience in a mobile system. It then determines the essential requirements that affect the user experience in energy-centric scheduling, namely proportional power sharing, compliance with time constraints and, when necessary, a trade-off between the power share and the time constraints. To meet these requirements, the classical fair queueing algorithm and its reference model are extended from the communications and CPU bandwidth domains to the energy domain, and on this basis the energy-based fair queueing (EFQ) algorithm is proposed to provide energy-based scheduling. The EFQ algorithm is designed to share the consumed power among tasks by scheduling them according to the energy they have consumed and their reserved share. The power share of each task with time constraints is protected against the various changes that may occur in the system. In addition, to better support real-time and multimedia tasks, a mechanism is proposed to be combined with the EFQ algorithm that gives scheduling preference, for short time intervals, to the most urgent tasks with time constraints. The properties of the EFQ algorithm are evaluated through high-level modelling and simulation. The simulation results indicate that the essential requirements of energy-centric scheduling can be achieved. The EFQ algorithm is later implemented in the Linux kernel. To evaluate the properties of the Linux-based EFQ scheduler, an experimental test bench was developed based on an embedded system, a multithreaded test-bench program and an open-source benchmark suite. Through specifically designed experiments, this thesis first verifies the properties of EFQ in the management of the power consumption share and in real-time scheduling, and then explores the potential benefits of employing EFQ scheduling in the optimization of the user experience for energy-limited mobile systems.
The experimental results on energy share management show that EFQ is more effective than the Linux CFS scheduler at managing energy, achieving a proportional sharing of the system energy regardless of the device on which the energy is consumed. The experimental results on real-time scheduling demonstrate that EFQ can achieve effective, flexible and robust compliance with time constraints even when the number of tasks or the error in the energy estimation increases. Finally, a comparative analysis of the experimental results on user experience optimization demonstrates, first, that EFQ is more effective and flexible than traditional processor scheduling algorithms, such as the default Linux scheduler, and second, that it makes it possible to optimize and preserve the user experience of energy-limited mobile systems. Abstract: Modern mobile devices have been becoming increasingly powerful in functionality and entertainment as next-generation mobile computing and communication technologies rapidly advance. However, the battery capacity has not experienced an equivalent increase. The user experience of modern mobile systems is therefore greatly affected by the battery lifetime, which is an unstable factor that is hard to control. To address this problem, previous works proposed energy-centric power management (PM) schemes to provide a strong guarantee on the battery lifetime by globally managing energy as a first-class resource in the system. As the processor scheduler plays a pivotal role in power management and application performance guarantees, this thesis explores the user experience optimization of energy-limited mobile systems from the perspective of energy-centric processor scheduling. This thesis first analyzes the general contributing factors of the mobile system user experience. Then it determines the essential requirements on energy-centric processor scheduling for user experience optimization, which are proportional power sharing, time-constraint compliance, and, when necessary, a tradeoff between the power share and the time-constraint compliance. To meet the requirements, the classical fair queuing algorithm and its reference model are extended from the network and CPU bandwidth sharing domains to the energy sharing domain and, based on that, the energy-based fair queuing (EFQ) algorithm is proposed for performing energy-centric processor scheduling. The EFQ algorithm is designed to provide proportional power shares to tasks by scheduling the tasks based on their energy consumption and weights. The power share of each time-sensitive task is protected upon changes in the scheduling environment to guarantee a stable performance, and any instantaneous power share that is overly allocated to one time-sensitive task can be fairly re-allocated to the other tasks. In addition, to better support real-time and multimedia scheduling, a real-time-friendly mechanism is combined with the EFQ algorithm to give time-limited scheduling preference to the time-sensitive tasks. Through high-level modelling and simulation, the properties of the EFQ algorithm are evaluated. The simulation results indicate that the essential requirements of energy-centric processor scheduling can be achieved. The EFQ algorithm is later implemented in the Linux kernel.
To assess the properties of the Linux-based EFQ scheduler, an experimental test-bench based on an embedded platform, a multithreading test-bench program, and an open-source benchmark suite is developed. Through specifically designed experiments, this thesis first verifies the properties of EFQ in power share management and real-time scheduling, and then explores the potential benefits of employing EFQ scheduling in the user experience optimization for energy-limited mobile systems. Experimental results on power share management show that EFQ is more effective than the Linux CFS scheduler in managing power shares and that it can achieve a proportional sharing of the system power regardless of which device the energy is spent on. Experimental results on real-time scheduling demonstrate that EFQ can achieve effective, flexible and robust time-constraint compliance upon increases in the energy estimation error and in the number of tasks. Finally, a comparative analysis of the experimental results on user experience optimization demonstrates that EFQ is more effective and flexible than traditional processor scheduling algorithms, such as those of the default Linux scheduler, in optimizing and preserving the user experience of energy-limited mobile systems.
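A minimal sketch of the energy-based fair queueing idea: each task accumulates an energy-based virtual time equal to its consumed energy divided by its weight, and the task with the smallest virtual time runs next, which yields power shares proportional to the weights. The per-slot energy figures and the weights are illustrative assumptions, and the thesis's real-time preference mechanism is not reproduced here.

```python
# Toy energy-based fair queueing (EFQ-style) scheduling loop (sketch).
import heapq

def efq_schedule(tasks, slots):
    """tasks: dict name -> (weight, energy_per_slot). Returns run counts and energy."""
    virtual = [(0.0, name) for name in tasks]          # (energy-based virtual time, task)
    heapq.heapify(virtual)
    runs = {name: 0 for name in tasks}
    energy = {name: 0.0 for name in tasks}
    for _ in range(slots):
        v, name = heapq.heappop(virtual)               # task with the smallest virtual time runs
        weight, e_slot = tasks[name]
        runs[name] += 1
        energy[name] += e_slot
        heapq.heappush(virtual, (v + e_slot / weight, name))
    return runs, energy

if __name__ == "__main__":
    tasks = {
        "video_decoder": (2.0, 40.0),    # weight 2, consumes 40 mJ per slot (assumed figures)
        "background_sync": (1.0, 10.0),  # weight 1, consumes 10 mJ per slot
    }
    runs, energy = efq_schedule(tasks, slots=1000)
    print("runs  :", runs)
    print("energy:", energy, "-> ratio", energy["video_decoder"] / energy["background_sync"])
```

Over many slots the consumed-energy ratio approaches the 2:1 weight ratio, which is the proportional power sharing property the abstract describes.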
Abstract:
The aim of this work is to develop an automated tool for the optimization of turbomachinery blades founded on an evolutionary strategy. This optimization scheme serves to deal with supersonic blade cascades for application to Organic Rankine Cycle (ORC) turbines. The blade geometry is defined using parameterization techniques based on B-Spline curves, which allow local control of the shape. The locations in space of the control points of the B-Spline curve define the design variables of the optimization problem. In the present work, the performance of the blade shape is assessed by means of fully-turbulent flow simulations performed with a CFD package, in which a look-up table method is applied to ensure an accurate thermodynamic treatment. The solver is coupled with the optimization tool to determine the optimal shape of the blade. As only blade-to-blade effects are of interest in this study, quasi-3D calculations are performed, and a single-objective evolutionary strategy is applied to the optimization. As a result, a non-intrusive tool, with no need for gradient definitions, is developed. The computational cost is reduced by the use of surrogate models. A Gaussian interpolation scheme (Kriging model) is applied to estimate the n-dimensional objective function, and a surrogate-based local optimization strategy proves to be an accurate approach to the optimization. In particular, the present optimization scheme has been applied to the re-design of a supersonic stator cascade of an axial-flow turbine. In this design exercise very strong shock waves are generated on the rear blade suction side and shock-boundary layer interaction mechanisms occur. A significant efficiency improvement is achieved as a consequence of a more uniform flow at the blade outlet section of the stator. This is also expected to provide beneficial effects on the design of a subsequent downstream rotor. The method provides an improvement over gradient-based methods, and an optimized blade geometry is easily achieved using the genetic algorithm.
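A minimal sketch of a Kriging-type (Gaussian) surrogate of the kind mentioned above, in which expensive CFD evaluations are replaced by a Gaussian-kernel interpolant fitted to a handful of sampled designs. The test function, length scale and nugget are illustrative assumptions, not the actual ORC blade objective.

```python
# Gaussian-kernel (Kriging-style) surrogate of an expensive objective (sketch).
import numpy as np

def kriging_fit(X, y, length_scale=0.3, nugget=1e-8):
    """Fit a zero-mean Gaussian-kernel interpolant to samples (X, y)."""
    K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / length_scale**2)
    alpha = np.linalg.solve(K + nugget * np.eye(len(X)), y)
    return lambda x: float(np.exp(-0.5 * ((X - x) ** 2).sum(-1) / length_scale**2) @ alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    expensive = lambda x: np.sin(6.0 * x[0]) + x[0] ** 2      # stand-in for a CFD evaluation
    X = rng.uniform(0.0, 1.0, size=(12, 1))                   # sampled design variables
    y = np.array([expensive(x) for x in X])
    surrogate = kriging_fit(X, y)
    x_test = np.array([0.42])
    print("surrogate:", surrogate(x_test), "| truth:", expensive(x_test))
```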
Abstract:
The IARC competitions aim at advancing the state of the art in UAVs. The 2014 challenge deals mainly with GPS/Laser-denied navigation, robot-robot interaction and obstacle avoidance in the setting of a ground robot herding problem. We present in this paper a drone which will take part in this competition. The platform and hardware it is composed of and the software we designed are introduced. This software has three main components: the visual information acquisition, the mapping algorithm and the Artificial Intelligence mission planner. A statement of the safety measures integrated in the drone and of our efforts to ensure field testing in conditions as close as possible to the challenge's is also included.
Abstract:
We propose in this work a very simple torsion-free beam element capable of capturing geometrical nonlinearities. The simple formulation is objective and unconditionally convergent for geometrically nonlinear models with large displacements, in the traditional sense that guarantees more precise numerical solutions for finer discretizations. The formulation does not employ rotational degrees of freedom, can be applied to two- and three-dimensional problems, and it is computationally very efficient.
Abstract:
Context: Measurement is crucial and important to empirical software engineering. Although reliability and validity are two important properties warranting consideration in measurement processes, they may be influenced by random or systematic error (bias) depending on which metric is used. Aim: Check whether the simple subjective metrics used in empirical software engineering studies are prone to bias. Method: Comparison of the reliability of a family of empirical studies on requirements elicitation that explore the same phenomenon using different design types and objective and subjective metrics. Results: The objectively measured variables (experience and knowledge) tend to achieve more reliable results, whereas subjective metrics using Likert scales (expertise and familiarity) tend to be influenced by systematic error or bias. Conclusions: Studies that predominantly use subjectively measured variables, like opinion polls or expert opinion acquisition, are liable to be affected by this kind of systematic error.
Abstract:
The family of Boosting algorithms is a type of classification and regression technique that has proven to be very effective in Computer Vision problems, such as the detection, tracking and recognition of faces, people, deformable objects and actions. The first and most popular Boosting algorithm, AdaBoost, was conceived for binary problems. Since then, many proposals have appeared with the aim of extending it to other, more general domains: multi-class, multi-label, cost-sensitive, etc. Our interest is centred on extending AdaBoost to multi-class classification, considering it a first step for later generalizations. In this thesis we propose two Boosting algorithms for multi-class problems based on new derivations of the concept of margin. The first of them, PIBoost, is conceived to tackle the problem by decomposing it into binary sub-problems. On one hand, we use a vectorial codification to represent labels and, on the other, we use the multi-class exponential loss function to evaluate the responses. This codification produces a set of margin values that entail a range of penalties for failures and rewards for successes. The stagewise optimization of the model generates an asymmetric Boosting process whose costs depend on the number of labels separated by each weak classifier. In this way our Boosting algorithm takes the class imbalance into account when building the classifier. The result is a well-founded method that canonically extends the original AdaBoost. The second proposed algorithm, BAdaCost, is conceived for multi-class problems endowed with a cost matrix. Motivated by the few works devoted to generalizing AdaBoost to the cost-sensitive multi-class setting, we have proposed a new concept of margin which, in turn, allows an appropriate loss function for evaluating costs to be derived. We consider our algorithm the most canonical extension of AdaBoost for this kind of problem, since it generalizes the SAMME, Cost-Sensitive AdaBoost and PIBoost algorithms. We also suggest a simple procedure to compute cost matrices suitable for improving the performance of Boosting when tackling standard problems and problems with imbalanced data. A series of experiments demonstrates the effectiveness of both methods against other well-known multi-class Boosting algorithms in their respective areas. These experiments use benchmark data sets from the Machine Learning community, firstly to minimize errors and secondly to minimize costs. In addition, we have successfully applied BAdaCost to a segmentation task, a particular case of a problem with imbalanced data. We conclude by justifying the future horizon of the framework we present, both for its applicability and for its theoretical flexibility. Abstract: The family of Boosting algorithms represents a type of classification and regression approach that has been shown to be very effective in Computer Vision problems, such as the detection, tracking and recognition of faces, people, deformable objects and actions. The first and most popular algorithm, AdaBoost, was introduced in the context of binary classification. Since then, many works have been proposed to extend it to the more general multi-class, multi-label and cost-sensitive domains.
Our interest is centered on extending AdaBoost to two problems in the multi-class field, considering it a first step for upcoming generalizations. In this dissertation we propose two Boosting algorithms for multi-class classification based on new generalizations of the concept of margin. The first of them, PIBoost, is conceived to tackle the multi-class problem by solving many binary sub-problems. We use a vectorial codification to represent class labels and a multi-class exponential loss function to evaluate classifier responses. This representation produces a set of margin values that provide a range of penalties for failures and rewards for successes. The stagewise optimization of this model introduces an asymmetric Boosting procedure whose costs depend on the number of classes separated by each weak learner. In this way the Boosting procedure takes class imbalances into account when building the ensemble. The resulting algorithm is a well-grounded method that canonically extends the original AdaBoost. The second algorithm proposed, BAdaCost, is conceived for multi-class problems endowed with a cost matrix. Motivated by the few cost-sensitive extensions of AdaBoost to the multi-class field, we propose a new margin that, in turn, yields a new loss function appropriate for evaluating costs. Since BAdaCost generalizes the SAMME, Cost-Sensitive AdaBoost and PIBoost algorithms, we consider our algorithm a canonical extension of AdaBoost to this kind of problem. We additionally suggest a simple procedure to compute cost matrices that improve the performance of Boosting in standard and unbalanced problems. A set of experiments is carried out to demonstrate the effectiveness of both methods against other relevant Boosting algorithms in their respective areas. In the experiments we resort to benchmark data sets used in the Machine Learning community, firstly for minimizing classification errors and secondly for minimizing costs. In addition, we successfully applied BAdaCost to a segmentation task, a particular problem in the presence of imbalanced data. We conclude the thesis by justifying the horizon of future improvements encompassed in our framework, due to its applicability and theoretical flexibility.
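For context, the sketch below implements the standard SAMME multi-class Boosting update, one of the algorithms that BAdaCost is stated to generalize. It is the textbook recipe, not the PIBoost or BAdaCost margin formulations themselves, and the dataset and the stump weak learner are illustrative choices.

```python
# Standard SAMME multi-class Boosting with decision stumps (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def samme_fit(X, y, n_classes, rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / w.sum()
        if err >= 1.0 - 1.0 / n_classes:        # weak learner no better than chance: stop
            break
        alpha = np.log((1.0 - err) / max(err, 1e-12)) + np.log(n_classes - 1.0)
        w *= np.exp(alpha * (pred != y))        # re-weight: emphasize misclassified samples
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def samme_predict(ensemble, X, n_classes):
    votes = np.zeros((len(X), n_classes))
    for alpha, stump in ensemble:
        votes[np.arange(len(X)), stump.predict(X)] += alpha
    return votes.argmax(axis=1)

if __name__ == "__main__":
    X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                               n_classes=4, random_state=0)
    model = samme_fit(X, y, n_classes=4)
    print("training accuracy:", (samme_predict(model, X, 4) == y).mean())
```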
Abstract:
Sexpartite vaults constitute one of the most interesting chapters in European Gothic architecture. Originally, the use of the square cross-ribbed vault was limited to relatively small spaces, but when the need arose to cover spaces of considerable size, a new vault with very peculiar characteristics appeared. This new vault was a cross-ribbed vault reinforced in the centre by a rib parallel to the transverse ribs, which effectively divided the vault in half. This configuration breaks the side arch into two fragments, creating a pair of windows on each side. The volumetrics of these vaults is extremely complex, and the difficulties involved in their construction perhaps explain why they were abandoned in favour of the simple cross-ribbed vault, now with rectangular sections. The existence of the sexpartite vault barely lasted more than fifty years, from the end of the 12th century to the beginning of the 13th. Towards the end of the 19th century Viollet-le-Duc gave a succinct explanation of this type of vault. Later, A. Choisy also devoted some pages to the French sexpartite vault; since then, the subject has only been broached in a few references in later studies on Gothic architecture. However, despite its short period of existence, the sexpartite vault spread throughout Europe and was used to build important vaulting. Viollet-le-Duc's sexpartite vault could be considered the prototype of them all, although the studies that we have conducted so far lead us to affirm that there is a wide variety of vaults, with different volumetric arrangements and different construction strategies. Therefore, we believe that this chapter of international Gothic deserves further study applying the knowledge and resources that are available today. This paper has been written to explore the most significant European sexpartite vaults. New measurement technology has led to a revolution in research into the history of construction, allowing studies to be conducted that were hitherto impossible. Thorough data collection using total station and photogrammetry has enabled us to identify the stereotomy of the voussoirs, tas-de-charges and keystones, as well as the bonding of the surfaces of the severies. A comparison of the construction techniques employed in the different vaults studied reveals common construction features and aspects that are specific to each country. Thus we are able to establish the relationship between sexpartite vaults in different European countries and their influence on each other.