951 results for Significance-driven computing


Relevance: 20.00%

Abstract:

Human platelet lysate (PL) is a cost-effective human source of multiple potent autologous pro-angiogenic factors, such as vascular endothelial growth factor A (VEGF A), fibroblast growth factor b (FGF b) and angiopoietin-1. Previously characterized nanocoatings were prepared by layer-by-layer assembly, incorporating PL with marine-origin polysaccharides, and were shown to activate human umbilical vein endothelial cells (HUVECs). Within 20 h of incubation, the more sulfated coatings induced the HUVECs to form tube-like structures, accompanied by increased expression of angiogenesis-associated genes such as angiopoietin-1 and VEGF A. This may be a cost-effective approach to modify 2D/3D constructs to instruct angiogenic cells towards the formation of neo-vascularization, driven by multiple and synergistic stimulations from the PL combined with sulfated polysaccharides.

Statement of Significance: The presence, or fast induction, of a stable and mature vasculature inside 3D constructs is crucial for new tissue formation and its viability. This has been one of the major tissue engineering challenges, limiting the dimensions of efficient tissue constructs. Many approaches based on cells, growth factors, 3D bioprinting and channel incorporation have been proposed. Herein, we explored a versatile technique, layer-by-layer assembly, in combination with platelet lysate (PL), which is a cost-effective source of many potent pro-angiogenic proteins and growth factors. The results suggest that the combination of PL with sulfated polyelectrolytes might be used to introduce interfaces onto 2D/3D constructs with the potential to induce the formation of cell-based tubular structures.

Relevance: 20.00%

Abstract:

Doctoral Thesis in Information Systems and Technologies, Area of Information Systems and Technology.

Relevance: 20.00%

Abstract:

This paper proposes and validates a model-driven software engineering technique for spreadsheets. The technique we envision builds on the embedding of spreadsheet models in a widely used spreadsheet system, which enables the creation and evolution of spreadsheet models inside that system. More precisely, we embed ClassSheets, a visual language with a syntax similar to that of common spreadsheets, created with the aim of specifying spreadsheets. Our embedding allows models and their conforming instances to be developed in the same environment. In practice, this convenient environment supports evolution steps at the model level while the corresponding instance is automatically co-evolved. Finally, we designed and conducted an empirical study with human users in order to assess our technique in production environments. The results of this study are promising and suggest that productivity gains are realizable under our model-driven spreadsheet development setting.
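
The model/instance co-evolution step described above can be illustrated independently of ClassSheets or any particular spreadsheet system. The following minimal Python sketch uses hypothetical names (SheetModel, SheetInstance, evolve_add_column) that are not part of ClassSheets; it only shows the general idea of evolving a model by adding a column and automatically propagating a default value into every existing row of the conforming instance.

    from dataclasses import dataclass, field

    @dataclass
    class SheetModel:
        """A minimal spreadsheet model: column names mapped to default values."""
        columns: dict

    @dataclass
    class SheetInstance:
        """An instance conforming to a SheetModel: a list of rows (dicts)."""
        model: SheetModel
        rows: list = field(default_factory=list)

        def add_row(self, **values):
            # Reject values for columns the model does not declare.
            unknown = set(values) - set(self.model.columns)
            if unknown:
                raise ValueError(f"columns not in model: {unknown}")
            row = dict(self.model.columns)  # start from the model's defaults
            row.update(values)
            self.rows.append(row)

    def evolve_add_column(instance, name, default):
        """Model-level evolution: add a column and co-evolve the instance by
        filling the new column with the default in every existing row."""
        instance.model.columns[name] = default
        for row in instance.rows:
            row[name] = default

    # Usage: a tiny sheet evolved with an extra "Quantity" column.
    model = SheetModel(columns={"Item": "", "Cost": 0.0})
    sheet = SheetInstance(model)
    sheet.add_row(Item="Paper", Cost=3.5)
    evolve_add_column(sheet, "Quantity", 1)
    print(sheet.rows)  # [{'Item': 'Paper', 'Cost': 3.5, 'Quantity': 1}]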

Relevance: 20.00%

Abstract:

The development of ubiquitous computing (ubicomp) environments raises several challenges in terms of their evaluation. Ubicomp virtual reality prototyping tools enable users to experience the system to be developed and are of great help in facing those challenges, as they support developers in assessing the consequences of a design decision in the early phases of development. Given the situated nature of ubicomp environments, a particular issue to consider is the level of realism provided by the prototypes. This work presents a case study in which two ubicomp prototypes, featuring different levels of immersion (desktop-based versus CAVE-based), were developed and compared. The goal was to determine the cost/benefit relation of both solutions, which one provided better user experience results, and whether or not simpler solutions provide the same user experience results as more elaborate ones.

Relevance: 20.00%

Abstract:

Integrated master's dissertation in Biomedical Engineering (specialization area: Medical Informatics).

Relevance: 20.00%

Abstract:

OBJECTIVE: To analyze the effects of in-hospital reocclusion of reperfused AMI culprit coronary arteries on mortality and to identify its predictors. METHODS: The study comprised a sample of 155 patients with AMI who underwent successful mechanical reperfusion by direct coronary angioplasty and angiographic control during hospitalization or before discharge. Patients were classified into group A, reoccluded patients (n=30), and group B, non-reoccluded patients (n=125). RESULTS: We identified predictors of in-hospital reocclusion and found significantly higher mortality among reoccluded patients (23.3% vs. 1.6%; p=0.00004). Silent reocclusion or typical angina at reocclusion carried a good prognosis. The independent predictors of in-hospital mortality were hypertension, multiarterial lesions, totally occluded AMI culprit lesions, failed redilatation, failed redilatation compared with no intention to redilate, no redilatation compared with no attempt to redilate, and reocclusion within the first 48 to 72 hours. The decision to redilate, independently of the result, led to a 50.0% reduction in hospital mortality (p=0.0366). CONCLUSION: In-hospital reocclusion of the AMI culprit coronary artery had an adverse effect similar to that reported in clinical studies, with a high mortality rate (23.3% vs. 1.6%; p=0.00004). The major contribution of this study is to recommend the reopening of reoccluded AMI culprit coronary arteries for the management of coronary artery reocclusion.
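
The mortality contrast reported above can be reproduced in outline with a standard test on a two-by-two table. The sketch below is illustrative only: the event counts (7 of 30 and 2 of 125 deaths) are inferred from the percentages and group sizes in the abstract, and the original analysis may have used a different statistical test.

    from scipy.stats import fisher_exact  # SciPy's exact test for 2x2 tables

    # Counts inferred from the abstract: 7/30 deaths in the reoccluded group
    # (23.3%) and 2/125 deaths in the non-reoccluded group (1.6%).
    table = [[7, 30 - 7],
             [2, 125 - 2]]

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.5f}")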

Relevance: 20.00%

Abstract:

OBJECTIVE: To assess the clinical significance of transient ischemic dilation of the left ventricle during myocardial perfusion scintigraphy with stress/rest sestamibi. METHODS: The study retrospectively analyzed 378 patients who underwent myocardial perfusion scintigraphy with stress/rest sestamibi, 340 of whom had a low probability of having ischemia and 38 had significant transient defects. Transient ischemic dilation was automatically calculated using Autoquant software. Sensitivity, specificity, and the positive and negative predictive values were established for each value of transient ischemic dilation. RESULTS: The values of transient ischemic dilation for the groups of low probability and significant transient defects were, respectively, 1.01 ± 0.13 and 1.18 ± 0.17. The values of transient ischemic dilation for the group with significant transient defects were significantly greater than those obtained for the group with a low probability (P<0.001). The greatest positive predictive values, around 50%, were obtained for the values of transient ischemic dilation above 1.25. CONCLUSION: The results suggest that transient ischemic dilation assessed using the stress/rest sestamibi protocol may be useful to separate patients with extensive myocardial ischemia from those without ischemia.
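
The sensitivity, specificity and predictive values quoted for each transient ischemic dilation (TID) cut-off follow from a two-by-two classification against the reference groups. The sketch below shows how these measures would be computed for a threshold such as 1.25; the TID values and labels in the example are made up for illustration and are not the study data.

    def diagnostic_metrics(tid_values, has_ischemia, threshold):
        """Classify each patient as test-positive if TID exceeds the threshold,
        then derive sensitivity, specificity, PPV and NPV against the reference
        label (True = significant transient defects, False = low probability)."""
        tp = fp = tn = fn = 0
        for tid, ischemia in zip(tid_values, has_ischemia):
            positive = tid > threshold
            if positive and ischemia:
                tp += 1
            elif positive and not ischemia:
                fp += 1
            elif not positive and not ischemia:
                tn += 1
            else:
                fn += 1
        return {
            "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
            "specificity": tn / (tn + fp) if tn + fp else float("nan"),
            "ppv": tp / (tp + fp) if tp + fp else float("nan"),
            "npv": tn / (tn + fn) if tn + fn else float("nan"),
        }

    # Illustrative data only (not the study values).
    tids = [0.95, 1.28, 1.10, 1.27, 1.31, 1.19, 1.40, 0.99]
    ischemia = [False, False, False, True, True, True, True, False]
    print(diagnostic_metrics(tids, ischemia, threshold=1.25))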

Relevance: 20.00%

Abstract:

One of the central topics of this project concerns the nature of computer science. The recent emergence of this discipline, together with its hybrid origin as both a formal science and a technological discipline, means that its characterization is still incomplete and even less agreed upon among researchers in the field. In "Three Paradigms of Computer Science", A. Eden presents three admittedly exaggerated positions on how to understand the object of study (ontology), the working methods (methodology), and the structure of theory and the justification of computing knowledge (epistemology): the so-called rationalist view, based on the idea that programs are logical formulas and that the way of working is deductive; the technocratic view, which presents computer science as an engineering discipline; and the view there called scientific, which would assimilate computing to the empirical sciences. Some problems of computer science are related to questions in the philosophy of mathematics, in particular the relation between abstract entities and the world. However, the prescriptive character of the axioms and theorems of programming theories may allow alternative interpretations and would strongly question the possibility of regarding computer science as an empirical science, at least in the traditional sense. On the other hand, the kind of analysis of computer science proposed in this project may contribute new ideas for thinking about problems in the philosophy of mathematics. An example of such possible contributions can be seen in Arkoudas' work "Computers, Justification, and Mathematical Knowledge", which sheds new light on the problem of the meaning of mathematical proofs. The objectives of the project are: to characterize the field of computer science; to evaluate the ontological, epistemological and methodological foundations of current computer science; and to analyze the relations between the different heuristic and epistemic perspectives and programming practices.

Relevance: 20.00%

Abstract:

Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current processors are targeted at general purposes and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers make use of multicore processors, and the industry trend is to increase the number of integrated processor cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration; currently available GPUs are able to run from 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of the new GPUs, which have come to be known as General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with that available to general-purpose processors, so the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices that can implement hardware logic with low latency, high parallelism and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds, but they are harder to program than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. Starting from the characteristics of the hardware, we seek to determine the properties a parallel algorithm must have in order to be accelerated and, conversely, which of these hardware types is most suitable for a given algorithm. In particular, we take into account the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware, with the goal of identifying the algorithms that fit better on a given architecture and of combining architectures so that they complement each other beneficially.
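
The criteria listed above (data dependence, synchronization, data size, programming complexity) can be made concrete with a minimal SMP example. The sketch below uses Python's standard multiprocessing pool to spread an independent, CPU-bound workload across the available cores; it illustrates only the SMP case discussed above, not the GPGPU or FPGA alternatives, and the workload itself is arbitrary.

    import math
    import multiprocessing as mp

    def heavy_kernel(x):
        """A CPU-bound, data-independent task: each input can be processed
        on any core with no synchronization against the others."""
        return sum(math.sqrt(x + i) for i in range(50_000))

    if __name__ == "__main__":
        data = list(range(200))

        # Sequential baseline.
        serial = [heavy_kernel(x) for x in data]

        # SMP version: the pool splits the input across the available cores.
        # This only pays off because the tasks are independent; a workload
        # with tight data dependencies would force synchronization and lose
        # most of the benefit, as discussed above.
        with mp.Pool(processes=mp.cpu_count()) as pool:
            parallel = pool.map(heavy_kernel, data)

        assert serial == parallel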

Relevance: 20.00%

Abstract:

The vast network of hedges in Ireland provides habitats of great importance to the wildlife of the country, yet surprisingly little survey work has been carried out on our hedgerows in the past. Now, with the implementation of the new Rural Environmental Protection Scheme, farmers will be paid to manage their hedgerows in such a way as to make them into increasingly attractive wildlife habitats. However, hedgerow management expertise seems to be somewhat lacking in Ireland, and we must draw upon the knowledge of our neighbours in the E.U., where a considerable amount of research has been carried out on this subject. The aim of this study is to present the relevant aspects of that research for the benefit of the people who will be involved in the administration of the Rural Environmental Protection Scheme and of anyone else involved in hedgerow management.

Relevance: 20.00%

Abstract:

Transmission of Cherenkov light through the atmosphere is strongly influenced by the optical clarity of the atmosphere and the prevailing weather conditions. The performance of telescopes measuring this light is therefore dependent on atmospheric effects. This thesis presents software and hardware developed to implement a prototype sky monitoring system for use on the proposed next-generation gamma-ray telescope array, VERITAS. The system, consisting of a CCD camera and a far-infrared pyrometer, was successfully installed and tested on the ten metre atmospheric Cherenkov imaging telescope operated by the VERITAS Collaboration at the F.L. Whipple Observatory in Arizona. The thesis also presents the results of observations of the BL Lacertae object, 1ES1959+650, made with the Whipple ten metre telescope. The observations provide evidence for TeV gamma-ray emission from the BL Lacertae object, 1ES1959+650, at a level of more than 15 standard deviations above background. This represents the first unequivocal detection of this object at TeV energies, making it only the third extragalactic source seen at such levels of significance in this energy range. The flux variability of the source on a number of timescales is also investigated.
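
Detection significances such as the 15 standard deviations quoted above are conventionally computed in ground-based gamma-ray astronomy with the Li & Ma (1983) likelihood-ratio formula applied to ON-source and OFF-source event counts. The sketch below implements that formula; the counts in the example are placeholders rather than the 1ES1959+650 data, and the thesis may have used a related but different estimator.

    import math

    def li_ma_significance(n_on, n_off, alpha):
        """Li & Ma (1983), equation 17: significance of an ON-source excess
        over background estimated from OFF-source counts, where alpha is the
        ratio of ON to OFF exposure."""
        total = n_on + n_off
        term_on = n_on * math.log((1 + alpha) / alpha * (n_on / total))
        term_off = n_off * math.log((1 + alpha) * (n_off / total))
        return math.sqrt(2.0 * (term_on + term_off))

    # Placeholder counts for illustration only (not the 1ES1959+650 data):
    # 1200 ON events and 800 OFF events with equal ON/OFF exposure (alpha = 1).
    print(f"{li_ma_significance(1200, 800, alpha=1.0):.1f} sigma")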