948 results for HETEROGENEOUS ELASTOGRAPHY PHANTOMS
Abstract:
Presented at the Work-in-Progress Session of the IEEE Real-Time Systems Symposium (RTSS 2015), December 1-4, 2015, San Antonio, U.S.A.
Abstract:
Dissertation submitted to obtain the Degree of Master in Electrical and Computer Engineering
Abstract:
Dissertation submitted to obtain the Degree of Master in Electrical and Computer Engineering
Abstract:
Dissertation submitted to obtain the Degree of Master in Electrical and Computer Engineering
Abstract:
This article deals with a real-life waste collection routing problem. To plan waste collection efficiently, large municipalities may be partitioned into convenient sectors, and only then are routing problems solved within each sector. Three distinct situations are described, resulting in three new models. In the first, there is a single waste disposal point, from which the vehicles depart and to which they return, and the fleet comprises three types of collection vehicles. In the second, the garage does not coincide with any disposal point, a single vehicle is used, and the disposal points (landfills or transfer stations) may be limited in the number of visits they can receive per day. In the third, there are multiple disposal points (none coinciding with the garage), visits to them are limited, and the fleet is composed of two types of vehicles. Computational results based both on instances adapted from the literature and on real cases are presented and analyzed. In particular, the results show the effectiveness of combining sectorization and routing to solve waste collection problems.
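As an illustration of the third situation, the sketch below builds a single collection route with a nearest-first rule, detouring to the closest disposal point whenever the next pickup would exceed capacity. The greedy heuristic, the coordinates, and the capacity value are illustrative assumptions, not the exact models proposed in the article.

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def greedy_route(garage, stops, disposals, capacity):
        # Serve stops nearest-first; detour to the closest disposal
        # point whenever the next pickup would exceed capacity.
        route, load, pos = [garage], 0, garage
        pending = dict(stops)  # stop coordinates -> demand
        while pending:
            nxt = min(pending, key=lambda s: dist(pos, s))
            if load + pending[nxt] > capacity:
                dump = min(disposals, key=lambda d: dist(pos, d))
                route.append(dump)
                pos, load = dump, 0
            else:
                route.append(nxt)
                load += pending.pop(nxt)
                pos = nxt
        # final disposal trip before returning to the garage
        dump = min(disposals, key=lambda d: dist(pos, d))
        route += [dump, garage]
        return route

    stops = {(2, 1): 4, (5, 3): 3, (1, 4): 5}
    print(greedy_route((0, 0), stops, [(3, 3), (6, 0)], capacity=8))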
Abstract:
Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality of service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as optimizing energy consumption. However, the performance of an application running inside a VM is not guaranteed, due to interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, as well as the dynamic behavior of the cloud, make efficient provisioning of resources even more difficult and a challenging problem in cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance deviation estimator and a scheduling algorithm to guide resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill contracted SLAs of real-world environments while reducing energy costs by as much as 21%.
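A minimal placement sketch in the spirit of the mechanism described above follows: a toy estimator penalizes co-hosting workloads of the same kind, and candidate hosts are ranked so that already-active machines are preferred, letting idle ones power down. The 10% penalty, the slowdown threshold, and all names are assumptions made for illustration, not the paper's estimator or scheduling algorithm.

    def estimated_slowdown(host, vm):
        # Toy interference model: CPU-bound VMs degrade with CPU
        # co-runners, network-bound VMs with network co-runners.
        same = sum(1 for v in host["vms"] if v["kind"] == vm["kind"])
        return 1.0 + 0.10 * same  # +10% per same-kind co-runner (assumed)

    def power_cost(host, vm):
        # Favour already-active hosts so idle ones can be powered down.
        return 0.0 if host["vms"] else 1.0

    def place(hosts, vm, max_slowdown=1.3):
        # Pick the cheapest host whose predicted slowdown keeps the SLA.
        ok = [h for h in hosts if estimated_slowdown(h, vm) <= max_slowdown]
        best = min(ok, key=lambda h: (power_cost(h, vm), estimated_slowdown(h, vm)))
        best["vms"].append(vm)
        return best

    hosts = [{"id": 0, "vms": []}, {"id": 1, "vms": [{"kind": "cpu"}]}]
    print(place(hosts, {"kind": "net"})["id"])  # -> 1: active host, no same-kind co-runner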
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted to obtain the Degree of Master in Electrical and Computer Engineering
Abstract:
Dissertation submitted to obtain the Degree of Master in Informatics Engineering
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
The Graphics Processing Unit (GPU) is present in almost every modern personal computer. Despite their special-purpose design, GPUs have been increasingly used for general computations, with very good results. Hence, there is a growing effort from the community to integrate this kind of device seamlessly into everyday computing. However, to fully exploit the potential of a system comprising GPUs and CPUs, these devices should be presented to the programmer as a single platform. Combining the power of CPU and GPU devices efficiently is highly dependent on each device's characteristics, resulting in platform-specific applications that cannot be ported to different systems. Moreover, the most efficient work balance among devices depends heavily on the computations to be performed and on the respective data sizes. In this work, we propose a solution for heterogeneous environments based on the abstraction level provided by algorithmic skeletons. Our goal is to take full advantage of the power of all CPU and GPU devices present in a system, without the need for different kernel implementations or explicit work distribution. To that end, we extended Marrow, an algorithmic skeleton framework for multi-GPUs, to support CPU computations and to balance the workload between devices efficiently. Our approach is based on an offline training execution that identifies the ideal work balance and platform configurations for a given application and input data size. The evaluation of this work shows that the combination of CPU and GPU devices can significantly boost the performance of our benchmarks in the tested environments when compared to GPU-only executions.
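The offline training step can be pictured as below: a range of CPU/GPU work splits is timed for one input size and the fastest fraction is kept. The runner callables and the sequential timing loop are stand-ins chosen for illustration; they are not Marrow's API, and a real run would execute the two partitions concurrently.

    import time

    def train_split(run_on_cpu, run_on_gpu, data, steps=11):
        # Time each candidate split and keep the fastest CPU fraction.
        best = (None, float("inf"))
        for i in range(steps):
            frac = i / (steps - 1)        # fraction of items sent to the CPU
            cut = int(len(data) * frac)
            t0 = time.perf_counter()
            run_on_cpu(data[:cut])        # sequential here; concurrent in practice
            run_on_gpu(data[cut:])
            elapsed = time.perf_counter() - t0
            if elapsed < best[1]:
                best = (frac, elapsed)
        return best

    cpu = lambda xs: [x * x for x in xs]   # stand-in for a CPU executor
    gpu = lambda xs: [x * x for x in xs]   # stand-in for a GPU executor
    print(train_split(cpu, gpu, list(range(100_000))))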
Abstract:
Rupture of aortic aneurysms (AA) is a major cause of death in the Western world. Currently, the clinical decision on surgical intervention is based on the diameter of the aneurysm. However, this method is not fully adequate. Noninvasive assessment of the elastic properties of the arterial wall can be a better predictor of AA growth and rupture risk. The purpose of this study is to estimate the mechanical properties of the aortic wall using in vitro inflation testing and 2D ultrasound (US) elastography, and to investigate the performance of the proposed methodology under physiological conditions. Two different inflation experiments were performed on twelve porcine aortas: 1) a static experiment over a large pressure range (0–140 mmHg); 2) a dynamic experiment closely mimicking the in vivo hemodynamics at physiological pressures (70–130 mmHg). 2D raw radiofrequency (RF) US datasets were acquired for one longitudinal and two cross-sectional imaging planes, for both experiments. The RF data were manually segmented, and a 2D vessel wall displacement tracking algorithm was applied to obtain the aortic diameter–time behavior. The shear modulus G was estimated assuming a Neo-Hookean material model. In addition, an incremental study based on the static data was performed to: 1) investigate the changes in G for increasing mean arterial pressure (MAP), for a given pressure difference (30, 40, 50 and 60 mmHg); 2) compare the results with those from the dynamic experiment, for the same pressure range. The resulting shear modulus G was 94 ± 16 kPa for the static experiment, which is in agreement with the literature. A linear dependency of G on MAP was found, whereas the effect of the pressure difference was negligible. The dynamic data revealed a G of 250 ± 20 kPa. For the same pressure range, the incremental shear modulus (Ginc) was 240 ± 39 kPa, which is in agreement with the former. In general, for all experiments, no significant differences in the values of G were found between different imaging planes. This study shows that 2D US elastography of aortas during inflation testing is feasible under controlled and physiological circumstances. In future studies, the in vivo, dynamic experiment should be repeated for a range of MAPs, and pathological vessels should be examined. Furthermore, the use of more complex material models needs to be considered to describe the non-linear behavior of vascular tissue.
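To make the estimation step concrete, the sketch below fits G to pressure-diameter pairs under a deliberately simplified model: a thin-walled, incompressible Neo-Hookean cylinder with no axial stretch, for which P = G * (h0/r0) * (1 - lambda^-4), with lambda = d/d0. The geometry, units, and closed-form relation are illustrative assumptions; the study's actual model and fitting procedure may differ.

    import numpy as np

    def fit_shear_modulus(pressures_kpa, diameters_mm, d0_mm, h0_mm):
        # Least-squares fit of G from pressure-diameter pairs, assuming
        # P = G * (h0 / r0) * (1 - lambda**-4), lambda = d / d0
        # (thin wall, incompressibility, fixed length all assumed).
        lam = np.asarray(diameters_mm) / d0_mm
        x = (h0_mm / (d0_mm / 2)) * (1.0 - lam ** -4)   # predictor
        p = np.asarray(pressures_kpa)
        return float(np.dot(x, p) / np.dot(x, x))       # G in kPa

    # Synthetic example: 16 mm vessel, 1.5 mm wall, true G = 95 kPa.
    d0, h0, G = 16.0, 1.5, 95.0
    lam = np.linspace(1.02, 1.25, 8)
    p = G * (h0 / (d0 / 2)) * (1 - lam ** -4)
    print(fit_shear_modulus(p, lam * d0, d0, h0))  # ~95.0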
Abstract:
DNA microarrays are one of the most widely used technologies for gene expression measurement. However, there are several distinct microarray platforms, from different manufacturers, each with its own measurement protocol, resulting in data that can hardly be compared or directly integrated. Data integration from multiple sources aims to improve the power of statistical tests, mitigating the data dimensionality problem. Integrating heterogeneous DNA microarray platforms comprises a set of tasks, ranging from the re-annotation of the features used for gene expression to data normalization and batch effect elimination. In this work, a complete methodology for gene expression data integration and application is proposed, comprising a transcript-based re-annotation process and several methods for batch effect attenuation. The integrated data will be used to select the best feature set and learning algorithm for a brain tumor classification case study. The integration will consider data from heterogeneous Agilent and Affymetrix platforms, collected from public gene expression databases such as The Cancer Genome Atlas and Gene Expression Omnibus.
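As a concrete baseline for the batch-effect step, the sketch below z-scores each gene within each platform batch, removing per-platform location and scale shifts. This is one common, simple approach; the names and data are invented, and the article evaluates several attenuation methods rather than this one specifically.

    import numpy as np

    def center_scale_per_batch(expr, batches):
        # Z-score each gene (row) within each batch so that
        # per-platform location/scale shifts are removed.
        expr = np.asarray(expr, dtype=float)   # genes x samples
        out = np.empty_like(expr)
        for b in set(batches):
            cols = [i for i, x in enumerate(batches) if x == b]
            block = expr[:, cols]
            mu = block.mean(axis=1, keepdims=True)
            sd = block.std(axis=1, keepdims=True) + 1e-9
            out[:, cols] = (block - mu) / sd
        return out

    # Two "platforms" measuring the same two genes with an offset.
    data = [[1.0, 1.2, 5.0, 5.1], [2.0, 2.1, 9.0, 9.2]]
    print(center_scale_per_batch(data, ["agilent", "agilent", "affy", "affy"]))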
Abstract:
Currently available clinical imaging methods do not provide highly detailed information about the location and severity of axonal injury, or about the expected recovery time of patients with traumatic brain injury [1]. High-Definition Fiber Tractography (HDFT) is a novel imaging modality that allows direct visualization and quantification of the degree of axonal damage, predicting functional deficits due to traumatic axonal injury and loss of cortical projections. This imaging modality is based on diffusion technology [2]. The lack of a phantom able to properly mimic the human brain hinders the testing, calibration, and validation of these medical imaging techniques. Most research in this area falls short on key points, such as the size scale at which brain fibers are reproduced and the quick and easy reproducibility of phantoms [3]. For that reason, it is necessary to develop similar structures matching the micron scale of axon tubes. Flexible textiles can play an important role here, since they allow producing controlled packing densities and crossing structures that closely match the crossing patterns of the human brain. To build a brain phantom, several parameters must be taken into account regarding material selection, such as hydrophobicity, density, and fiber diameter, since these factors directly influence fractional anisotropy values. Fiber cross-section shape is another important parameter. Earlier studies showed that synthetic fibrous materials are a good choice for building a brain phantom [4]. The present work is part of a broader project that aims to develop a brain phantom made of fibrous materials to validate and calibrate HDFT. Owing to the similarity between axons and the thousands of hollow multifilaments in a fibrous arrangement such as a yarn, low-twist polypropylene multifilament yarns were selected for this development. Accordingly, extruded hollow filaments were analysed by scanning electron microscopy to characterize their main dimensions and shape. In order to approximate the dimensional scale of human axons, five types of polypropylene yarn with different linear densities (denier) were used, with the aim of understanding the effect of linear density on the filament inner and outer areas. Moreover, to achieve the required dimensions, the cross-section of the polypropylene filaments was reduced in a drawing stage of a filament extrusion line. Subsequently, tensile tests were performed to characterize the mechanical behaviour of the hollow filaments and to evaluate the differences between stretched and non-stretched filaments. In general, an increase in linear density causes an increase in the size of the filament cross-section. As the structural orientation of the filaments increases, induced by stretching, breaking tenacity increases and elongation at break decreases. The production of hollow fibers with the required characteristics is one of the key steps in creating a brain phantom that properly mimics the human brain and that may be used for the validation and calibration of HDFT, an imaging approach that is expected to contribute significantly to brain-related research.
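The link between linear density and cross-section size can be made concrete with a little arithmetic: denier is mass in grams per 9000 m, so the polymer cross-sectional area follows from mass per length divided by density. The sketch below assumes a polypropylene density of about 0.91 g/cm3 and a hypothetical hollowness fraction; the figures are illustrative, not measurements from the study.

    import math

    def filament_dimensions(denier_per_filament, density_g_cm3=0.91, hollow_fraction=0.0):
        # denier = grams per 9000 m, so polymer area = (mass/length) / density;
        # hollow_fraction is the void share of the total cross-section (assumed).
        g_per_m = denier_per_filament / 9000.0
        polymer_area_um2 = g_per_m / density_g_cm3 * 1e6   # 1 g/m / (g/cm3) = 1e6 um2
        total_area_um2 = polymer_area_um2 / (1.0 - hollow_fraction)
        outer_d_um = math.sqrt(4.0 * total_area_um2 / math.pi)
        return total_area_um2, outer_d_um

    # Total area (um2) and outer diameter (um) of a 2-denier, 30% hollow filament.
    print(filament_dimensions(2.0, hollow_fraction=0.3))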
Abstract:
Developing empirical knowledge of how the spatial heterogeneity of a landscape affects the movement patterns of an animal species is considered a priority for the management and conservation of species and their habitats. In the case of insect pests, such studies are important because they provide the fundamental theoretical and empirical basis for pest management. The persistence of these species in a modified landscape depends on the interplay between ecological processes and landscape structure, such as interactions among species, the availability of habitat patches, and the influence of management practices. Analyzing these processes in an agroecosystem makes it possible to simplify models of spatial heterogeneity, because crop fields are internally homogeneous and anthropogenic disturbances generally occur at the patch scale, allowing insect responses to be determined at that scale. Alfalfa (Medicago sativa) is a fundamental resource for agricultural production, and in Argentina it is the most important forage resource, forming the basis of the country's livestock production. Around 5 million hectares are currently cultivated, of which one million are sown in the province of Córdoba. Alfalfa also plays an important role in the sustainability of production systems through its contribution to fertility recovery and soil stability. The alfalfa caterpillar (Colias lesbia) is the crop's main pest, causing on average the loss of one cutting per year. The main hypothesis of our work is that the abundance and mobility patterns of the alfalfa caterpillar are affected by landscape structure and management practices. The specific objectives of the project are: (a) to establish the effect of landscape structure and crop management on the abundance of the different life stages of Colias lesbia; (b) to determine the dispersal patterns of Colias lesbia in relation to the spatial heterogeneity of the landscape; (c) to build a predictive model of Colias lesbia abundance based on the spatial structure of the landscape, climate, and crop management; and (d) to develop a set of regional-scale management recommendations for the control of the alfalfa caterpillar. To this end, alfalfa fields will be selected in the eastern region of the province of Córdoba, in the department of San Justo, where an initial survey of the study area will be carried out and growers will be consulted. In parallel, a supervised classification of the study area will be performed using Landsat TM scenes. In the selected patches, the different life stages of Colias lesbia will be sampled every two weeks during the summer months over three years. Correlation and regression analyses will be performed between the independent variables (landscape configuration and dynamics metrics) and the dependent variables (mean abundance of the different population stages). Mark-release-recapture experiments will also be carried out to determine how the movement of the species depends on landscape structure. To model the species' inherent movement, the field data will be combined with a diffusion model using Bayesian methods. We expect to obtain models that explain the mechanisms generating the observed patterns.
With this information, general and specific guidelines will be proposed for managing the alfalfa caterpillar at a regional scale. In this regard, we expect to provide information that helps restrict the dispersal of the pest and to reduce the costs and harms of chemical control that could be avoided by applying integrated pest management and area-wide management practices that minimize the pest's impact, as well as to contribute to the general knowledge of insect ecology.
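As a toy version of the movement-modeling step, the sketch below simulates Gaussian random-walk dispersal, for which the mean squared displacement in two dimensions grows as MSD = 4*D*t, and recovers the diffusion coefficient D from the simulated displacements. The parameters are invented, and the project's Bayesian diffusion model fitted to field data is not shown.

    import random

    def simulate_dispersal(n=2000, steps=100, step_sd=1.0):
        # Random-walk dispersal: in 2D, MSD = 4*D*t, so D can be
        # recovered from displacement distances after t steps.
        msd = 0.0
        for _ in range(n):
            x = y = 0.0
            for _ in range(steps):
                x += random.gauss(0, step_sd)
                y += random.gauss(0, step_sd)
            msd += x * x + y * y
        msd /= n
        return msd / (4 * steps)   # estimate of D; true value is step_sd**2 / 2

    print(simulate_dispersal())  # ~0.5 for step_sd = 1.0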