938 results for Time analysis


Relevance: 60.00%

Abstract:

IceCube, a neutrino telescope currently under construction at the South Pole and expected to be completed in 2011, can detect galactic core-collapse supernovae with high significance and unmatched statistical precision of the neutrino light curve. Such supernovae are accompanied by a massive burst of low-energy neutrinos of all flavours. As these pass through the detector medium, ice, they produce positrons and electrons, which in turn generate local Cherenkov light showers that in their sum illuminate the entire ice. Although IceCube is optimized for high-energy particle tracks, a detection is therefore possible through a collective increase in the noise rates of all optical modules. The dominant reaction is the inverse beta decay of electron antineutrinos, which accounts for more than 90% of the total signal.

This thesis describes the implementation and operation of the supernova data-acquisition software and of the real-time analysis with which the detection method described above has been realized since August 2007. The data from the first two years were analyzed and demonstrate extremely stable behaviour of the detector as a whole and of almost all light sensors, which show an average failure rate of only 0.3%. A simulation of the detector response for two different supernova models yields a detection range for IceCube that, in the best case, extends to the Large Magellanic Cloud at a distance of 51 kpc. Unfortunately, the detector cannot resolve the deleptonization peak, because neutrino flavour oscillations inside the star modify the neutrino spectra unfavourably. However, from the earliest rise of the signal, the inverted mass hierarchy and $\sin^2 2\theta_{13} > 10^{-3}$ can be established in a model-independent way, provided the distance to the supernova is $\leq 6$ kpc. The same can be achieved by evaluating a possible influence of Earth matter on the neutrino oscillations with the help of measurements from a second neutrino detector.
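The collective rate-increase method lends itself to a simple significance estimate: sum the counts of all optical modules over a time window and compare with the summed baseline expectation under Poisson statistics. A minimal sketch with illustrative module counts and rates (function and variable names are assumptions, not the actual DAQ code):

```python
import math

def detection_significance(rates_hz, baseline_hz, window_s):
    """Significance (in sigma) of a collective rate increase summed over
    all optical modules, assuming independent Poisson noise per module.

    rates_hz    -- observed per-module count rates during the window
    baseline_hz -- long-term per-module baseline rates
    window_s    -- length of the analysis window in seconds
    """
    observed = sum(r * window_s for r in rates_hz)
    expected = sum(b * window_s for b in baseline_hz)
    # For large counts the Poisson fluctuation is ~sqrt(expected).
    return (observed - expected) / math.sqrt(expected)

# Illustrative numbers: 5000 modules at 500 Hz baseline; a supernova
# burst adds 50 Hz to each module for half a second.
baseline = [500.0] * 5000
burst = [550.0] * 5000
sigma = detection_significance(burst, baseline, window_s=0.5)
```

Even a modest per-module excess becomes a very large collective significance, which is what makes the method viable despite the high single-module noise.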

Relevance: 60.00%

Abstract:

Promoting cell adhesion by so-called biomimetic surfaces is regarded in medicine as a promising approach to counteract complications such as foreign-body reactions after implantation. Besides the immobilization of single biomolecules such as the RGD peptide, proteins, and growth factors on various materials, current research focuses on the co-immobilization of two molecules at the same time. Here, the functional groups of, for example, collagen are used with only one coupling chemistry, so the coupling efficiency of the individual components can only be controlled to a limited extent. The aim of the present work was to develop an immobilization procedure that permits the controlled, independent coupling of two factors. As an example, the adhesion-promoting RGD peptide (arginine-glycine-aspartic acid) was to be bound to titanium together with the growth factor VEGF (Vascular Endothelial Growth Factor). In further experiments, the pro-adhesive factors fibronectin, collagen, laminin, and osteopontin were to be immobilized and investigated.

The amino functionalization of titanium by plasma-polymerized allylamine layers served as the basis for developing the wet-chemical co-immobilization procedure. For an independent and separate attachment of the different biomolecules, the focus was on developing a suitable crosslinker system. The developed surfaces were characterized by infrared spectroscopy, surface plasmon resonance (SPR) spectroscopy, contact-angle measurements, step profiling, and X-ray photoelectron spectroscopy (XPS). SPR kinetics measurements were performed to analyze the binding processes in real time.

The biological functionality of the modified surfaces was examined in vitro on endothelial cells (HUVECs) and osteoblasts (HOBs) and in vivo in an animal model on the tibia of rabbits. The results show that all of the biomolecules mentioned can be covalently coupled to titanium individually and, as demonstrated for RGD and VEGF, co-immobilized in a separate two-step procedure. Furthermore, the biological functionality of the bound factors was demonstrated. On the RGD-modified surfaces, enhanced adhesion of HUVECs was observed after 7 days, with a significantly increased cell coverage of 28.5% (p<0.05), whereas on plain titanium only 13% was observed. Both VEGF- and RGD/VEGF-modified samples showed enhanced cell adhesion and a significantly increased cell coverage compared with titanium after only 24 hours. At a coverage of 7.4% on titanium, VEGF-modified samples, at 32.3% (p<0.001), had a stronger effect on HUVECs than RGD/VEGF-modified samples at 13.2% (p<0.01). The pro-adhesive factors clearly stimulated the adhesion of HUVECs and HOBs compared with plain titanium. By far the highest HUVEC coverage was observed after 24 hours on fibronectin, at 44.6% (p<0.001), and collagen, at 39.9% (p<0.001). Laminin had no effect and osteopontin only a very small effect on HUVECs. For osteoblasts, significantly increased coverage was observed for all pro-adhesive factors, with the highest values after 7 days on collagen, at 90.6% (p<0.001), and laminin, at 86.5% (p<0.001), compared with 32.3% on titanium. Evaluation of the animal experiments showed that the VEGF-modified osteosynthesis plates induced increased new bone formation compared with the plain titanium controls. No such effect was observed for RGD/VEGF-modified implants.

Overall, it was shown that with plasma-polymerized allylamine layers the biomolecules mentioned can be bound individually as well as co-immobilized separately and in a controlled manner. Furthermore, biological functionality was demonstrated in vitro for all factors after coupling. Contrary to expectations, however, no additional biological effect of co-immobilizing RGD and VEGF compared with the individually immobilized factors was found. To reach clinical application, the procedure now needs to be optimized with respect to the immobilized amounts of the different factors.

Relevance: 60.00%

Abstract:

Advances in food transformation have dramatically increased the diversity of products on the market and, consequently, exposed consumers to a complex spectrum of bioactive nutrients whose potential risks and benefits have mostly not been confidently demonstrated. Therefore, tools are needed to efficiently screen products for selected physiological properties before they enter the market. NutriChip is an interdisciplinary modular project funded by the Swiss programme Nano-Tera, which groups scientists from several areas of research with the aim of developing analytical strategies that will enable functional screening of foods. The project focuses on postprandial inflammatory stress, which potentially contributes to the development of chronic inflammatory diseases. The first module of the NutriChip project is composed of three in vitro biochemical steps that mimic the digestion process, intestinal absorption, and subsequent modulation of immune cells by the bioavailable nutrients. The second module is a miniaturised form of the first module (gut-on-a-chip) that integrates a microfluidic-based cell co-culture system and super-resolution imaging technologies to provide a physiologically relevant fluid flow environment and allows sensitive real-time analysis of the products screened in vitro. The third module aims at validating the in vitro screening model by assessing the nutritional properties of selected food products in humans. Because of the immunomodulatory properties of milk as well as its amenability to technological transformation, dairy products have been selected as model foods. The NutriChip project reflects the opening of food and nutrition sciences to state-of-the-art technologies, a key step in the translation of transdisciplinary knowledge into nutritional advice.

Relevance: 60.00%

Abstract:

Current concepts of synaptic fine-structure are derived from electron microscopic studies of tissue chemically fixed with aldehydes. However, chemical fixation with glutaraldehyde and paraformaldehyde and subsequent dehydration in ethanol result in uncontrolled tissue shrinkage. While electron microscopy allows for the unequivocal identification of synaptic contacts, it cannot be used for real-time analysis of structural changes at synapses. For the latter purpose, advanced fluorescence microscopy techniques must be applied, which, however, do not allow for the identification of synaptic contacts. Here, two approaches are described that may overcome, at least in part, some of these drawbacks in the study of synapses. By focusing on a characteristic, easily identifiable synapse, the mossy fiber synapse in the hippocampus, we first describe high-pressure freezing of fresh tissue as a method that may be applied to study subtle changes in synaptic ultrastructure associated with functional synaptic plasticity. Next, we propose to label presynaptic mossy fiber terminals and postsynaptic complex spines on CA3 pyramidal neurons with different fluorescent dyes to allow for the real-time monitoring of these synapses in living tissue over extended periods of time. We expect these approaches to lead to new insights into the structure and function of central synapses.

Relevance: 60.00%

Abstract:

Characterizing the spatial scaling and dynamics of convective precipitation in mountainous terrain and the development of downscaling methods to transfer precipitation fields from one scale to another is the overall motivation for this research. Substantial progress has been made on characterizing the space-time organization of Midwestern convective systems and tropical rainfall, which has led to the development of statistical/dynamical downscaling models. Space-time analysis and downscaling of orographic precipitation has received less attention due to the complexities of topographic influences. This study uses multiscale statistical analysis to investigate the spatial scaling of organized thunderstorms that produce heavy rainfall and flooding in mountainous regions. Focus is placed on the eastern and western slopes of the Appalachian region and the Front Range of the Rocky Mountains. Parameter estimates are analyzed over time and attention is given to linking changes in the multiscale parameters with meteorological forcings and orographic influences on the rainfall. Influences of geographic regions and predominant orographic controls on trends in multiscale properties of precipitation are investigated. Spatial resolutions from 1 km to 50 km are considered. This range of spatial scales is needed to bridge typical scale gaps between distributed hydrologic models and numerical weather prediction (NWP) forecasts and attempts to address the open research problem of scaling organized thunderstorms and convection in mountainous terrain down to 1-4 km scales.
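The multiscale analysis described, examining how rainfall statistics change as the field is aggregated from 1 km toward 50 km resolution, can be illustrated by coarse-graining a gridded field over successive factors and fitting a log-log slope. This is a simplified scale-variance estimate, not the study's actual estimator; all names are illustrative:

```python
import numpy as np

def scaling_exponent(field, factors=(1, 2, 4, 8)):
    """Coarse-grain a 2-D rain field by successive block-averaging
    factors and fit the slope of log(variance) versus log(scale).
    The slope summarizes how variability decays with aggregation."""
    variances, scales = [], []
    for f in factors:
        n = (field.shape[0] // f) * f
        m = (field.shape[1] // f) * f
        # block-average f x f cells into one coarse cell
        coarse = field[:n, :m].reshape(n // f, f, m // f, f).mean(axis=(1, 3))
        variances.append(coarse.var())
        scales.append(f)
    slope, _ = np.polyfit(np.log(scales), np.log(variances), 1)
    return slope
```

For spatially uncorrelated noise the block-mean variance falls as the square of the aggregation factor (slope near -2); organized precipitation decays more slowly, and tracking that slope over time is one way to link scaling parameters to meteorological forcing.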

Relevance: 60.00%

Abstract:

Determining the number of bending cycles experienced by running wire ropes is an essential part of the service-life analysis of rope drives. During rope inspection, particular attention should be paid to the rope sections that undergo the most bending cycles during operation. Especially with multiple reeving, however, it is not always evident from the outset which rope sections these are. Based on the geometry of the multiply reeved rope drive, a computer-aided analysis method is presented for determining the number of bending cycles along the wire rope during one working cycle.
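The geometric idea behind such an analysis can be illustrated with a heavily simplified model: discretize the rope into segments and, for every sheave, mark the stretch of rope that runs over it while the load travels up and back down. The single-layer geometry and all names below are illustrative assumptions, not the article's method:

```python
def bending_cycles(rope_length, sheave_positions, travel, step=0.1):
    """Count, per discretized rope segment, the bending cycles over one
    work cycle (load hoisted up and lowered back down).

    sheave_positions -- rope coordinates (m from the fixed end) of each
                        sheave when the load is in its lowest position
    travel           -- how far the rope runs through the reeving while
                        hoisting
    """
    n = int(round(rope_length / step))
    cycles = [0] * n
    for s in sheave_positions:
        # while hoisting, rope points from s to s + travel pass this sheave
        lo = int(round(s / step))
        hi = int(round(min(s + travel, rope_length) / step))
        for i in range(lo, hi):
            cycles[i] += 2  # bent once on the way up, once on the way down
    return cycles
```

Even this toy model shows the point of the article: the most-bent sections are wherever the per-sheave stretches overlap, which is not obvious from the rope end positions alone.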

Relevance: 60.00%

Abstract:

The mismatching of alveolar ventilation and perfusion (VA/Q) is the major determinant of impaired gas exchange. The gold standard for measuring VA/Q distributions is based on measurements of the elimination and retention of infused inert gases. Conventional multiple inert gas elimination technique (MIGET) uses gas chromatography (GC) to measure the inert gas partial pressures, which requires tonometry of blood samples with a gas that can then be injected into the chromatograph. The method is laborious and requires meticulous care. A new technique based on micropore membrane inlet mass spectrometry (MMIMS) facilitates the handling of blood and gas samples and provides nearly real-time analysis. In this study we compared MIGET by GC and MMIMS in 10 piglets: 1) 3 with healthy lungs; 2) 4 with oleic acid injury; and 3) 3 with isolated left lower lobe ventilation. The different protocols ensured a large range of normal and abnormal VA/Q distributions. Eight inert gases (SF6, krypton, ethane, cyclopropane, desflurane, enflurane, diethyl ether, and acetone) were infused; six of these gases were measured with MMIMS, and six were measured with GC. We found close agreement of retention and excretion of the gases and the constructed VA/Q distributions between GC and MMIMS, and predicted PaO2 from both methods compared well with measured PaO2. VA/Q by GC produced more widely dispersed modes than MMIMS, explained in part by differences in the algorithms used to calculate VA/Q distributions. In conclusion, MMIMS enables faster measurement of VA/Q, is less demanding than GC, and produces comparable results.
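The inert-gas method rests on a simple steady-state relation: a gas with blood-gas partition coefficient λ passing a compartment with ventilation-perfusion ratio VA/Q is retained in the fraction λ/(λ + VA/Q). A minimal sketch for a discrete-compartment lung (this is the textbook retention relation only, not the MIGET recovery algorithm; the λ values are rough):

```python
def retention(partition_coeff, vaq_ratios, q_fractions):
    """Overall arterial retention R = sum_i q_i * lam / (lam + VA/Q_i)
    for a lung of discrete compartments with perfusion fractions q_i."""
    lam = partition_coeff
    return sum(q * lam / (lam + v) for v, q in zip(vaq_ratios, q_fractions))

# Homogeneous lung with VA/Q = 1: a gas with lam = 1 is retained 50%.
r_mid = retention(1.0, [1.0], [1.0])
# A poorly soluble gas such as SF6 (lam roughly 0.005) is almost fully
# excreted, while a highly soluble gas such as acetone (lam roughly 300)
# is almost fully retained.
r_sf6 = retention(0.005, [1.0], [1.0])
r_acetone = retention(300.0, [1.0], [1.0])
```

Spanning λ over several orders of magnitude, as the eight infused gases do, is what makes the measured retentions informative about the whole VA/Q distribution.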

Relevance: 60.00%

Abstract:

There is interest in the potential of companion animal surveillance to provide data to improve pet health and to provide early warning of environmental hazards to people. We implemented a companion animal surveillance system in Calgary, Alberta and the surrounding communities. Informatics technologies automatically extracted electronic medical records from participating veterinary practices and identified cases of enteric syndrome in the warehoused records. The data were analysed using time-series analyses and a retrospective space-time permutation scan statistic. We identified a seasonal pattern of reports of occurrences of enteric syndromes in companion animals and four statistically significant clusters of enteric syndrome cases. The cases within each cluster were examined and information about the animals involved (species, age, sex), their vaccination history, possible exposure or risk behaviour history, information about disease severity, and the aetiological diagnosis was collected. We then assessed whether the cases within the cluster were unusual and if they represented an animal or public health threat. There was often insufficient information recorded in the medical record to characterize the clusters by aetiology or exposures. Space-time analysis of companion animal enteric syndrome cases found evidence of clustering. Collection of more epidemiologically relevant data would enhance the utility of practice-based companion animal surveillance.
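The space-time permutation scan statistic used here evaluates cylinders (a zone crossed with a day window) and ranks them by a Poisson log-likelihood ratio of observed versus expected counts. A toy version with an exhaustive search and no Monte Carlo significance testing (the zone/day encoding and function name are illustrative, not the published implementation):

```python
from collections import defaultdict
from math import log

def best_cluster(cases, zones, n_days):
    """cases: list of (zone, day) report tuples. Scans every
    zone x [start, end] day window and returns the cylinder with the
    highest Poisson log-likelihood ratio, using the space-time
    permutation expectation mu = (zone total) * (window total) / N."""
    N = len(cases)
    zone_tot, day_tot, cell = defaultdict(int), defaultdict(int), defaultdict(int)
    for z, d in cases:
        zone_tot[z] += 1
        day_tot[d] += 1
        cell[(z, d)] += 1
    best_llr, best_cyl = 0.0, None
    for z in zones:
        for start in range(n_days):
            for end in range(start, n_days):
                c = sum(cell[(z, d)] for d in range(start, end + 1))
                mu = zone_tot[z] * sum(day_tot[d] for d in range(start, end + 1)) / N
                if 0 < mu < c < N:
                    llr = c * log(c / mu) + (N - c) * log((N - c) / (N - mu))
                    if llr > best_llr:
                        best_llr, best_cyl = llr, (z, start, end)
    return best_llr, best_cyl
```

In practice the significance of the top cylinder is then assessed by re-running the scan on many random permutations of the case dates, which this sketch omits.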

Relevance: 60.00%

Abstract:

Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.

Relevance: 60.00%

Abstract:

Dynamic scheduling increases the expressive power of logic programming languages, but also introduces some overhead. In this paper we present two classes of program transformations designed to reduce this additional overhead, while preserving the operational semantics of the original programs, modulo ordering of literals woken at the same time. The first class of transformations simplifies the delay conditions while the second class moves delayed literals later in the rule body. Application of the program transformations can be automated using information provided by compile-time analysis. We provide experimental results obtained from an implementation of the proposed techniques using the CIAO prototype compiler. Our results show that the techniques can lead to substantial performance improvement.

Relevance: 60.00%

Abstract:

This project develops a data-collection and analysis methodology for optimizing production in the advance phase of construction of the Seiró tunnel. The tunnel is part of the construction works of the high-speed railway line (north-northwest corridor) as it passes through the province of Ourense. By analyzing several blast design parameters in the construction of a tunnel of 73 m2 cross-section in granitic rock, the aim is to optimize the excavation cycle time. The study examines the variation in fragmentation, contour, and advance length obtained; the influence of the drilling pattern and the type of cut used on these parameters; and their repercussions on the cycle time. The analysis concludes that the optimal drilling pattern among those used consists of 112 blastholes of 51 mm diameter with a parallel cut of 4 empty holes, which yields the best advance, contour, and fragmentation results without negatively affecting the cycle time.

Relevance: 60.00%

Abstract:

The solaR package allows for reproducible research on both photovoltaic (PV) system performance and solar radiation. It includes a set of classes, methods and functions to calculate the sun geometry and the solar radiation incident on a photovoltaic generator and to simulate the performance of several applications of photovoltaic energy. The package performs the whole calculation procedure from daily or intradaily global horizontal irradiation to the final productivity of grid-connected PV systems and water-pumping PV systems. It is designed around a set of S4 classes whose core is a group of slots with multivariate time series. The classes share a variety of methods to access the information, along with several visualization methods. In addition, the package provides a tool for the visual statistical analysis of the performance of a large PV plant composed of several systems. Although solaR is primarily designed for time series associated with a location defined by its latitude/longitude values and its temperature and irradiation conditions, it can easily be combined with spatial packages for space-time analysis.
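The sun-geometry layer of such a calculation starts from textbook relations, for example Cooper's approximation for the solar declination and the sunset hour angle cos ωs = −tan φ tan δ. A Python sketch of those two formulas (solaR itself is an R package; this only illustrates the geometry it computes):

```python
import math

def declination(day_of_year):
    """Solar declination in degrees, Cooper's approximation:
    delta = 23.45 * sin(360 * (284 + n) / 365)."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def sunset_hour_angle(lat_deg, day_of_year):
    """Sunset hour angle in degrees from cos(ws) = -tan(phi) * tan(delta);
    valid outside polar day/night conditions."""
    phi = math.radians(lat_deg)
    delta = math.radians(declination(day_of_year))
    return math.degrees(math.acos(-math.tan(phi) * math.tan(delta)))
```

From the hour angle one obtains day length and, with an irradiation model, the extraterrestrial and incident radiation that feed the productivity calculation downstream.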

Relevance: 60.00%

Abstract:

Hyperspectral image analysis provides information with very high spectral resolution: hundreds of bands spanning from the infrared to the ultraviolet spectrum. These images are having a strong impact in the medical field, notably in the detection of different types of cancer. One of the main open problems in this area is analyzing such images in real time: because of the large data volume they contain, the required computational power is very high. One of the main research lines for reducing the processing time is based on distributing the analysis across several cores working in parallel. Along this line, this work develops a library for the RVC-CAL language, which is specifically designed for multimedia applications and allows parallelization in an intuitive way. The library collects the functions needed to implement two of the four stages of the spectral processing chain: dimensionality reduction and endmember extraction. This work is complemented by that of Raquel Lazcano in her Diploma Project, which develops the functions needed to complete the other two stages of the unmixing chain.

The document is divided into several parts. The first presents the motivation for this Diploma Project and the objectives to be achieved. This is followed by an extensive study of the current state of the art, covering hyperspectral images as well as the tools and platforms used to split the work across cores and to identify the problems that may arise in doing so. After the theoretical background, the methodology followed to compose the unmixing chain and to generate the library is explained; an important point here is the use of specialized libraries for complex matrix operations, implemented in C++. The results obtained are then presented, first stage by stage and afterwards for the complete processing chain implemented on one or several cores. Finally, a series of conclusions is drawn from analyzing the different algorithms in terms of quality of results, processing times, and resource consumption, and several possible future lines of work related to these results are proposed.
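Dimensionality reduction in such chains is typically a PCA-type transform on the spectral axis: each pixel's spectrum is projected onto the leading eigenvectors of the band covariance matrix. A self-contained NumPy sketch (the thesis targets RVC-CAL with C++ matrix libraries; this Python version only illustrates the operation):

```python
import numpy as np

def reduce_bands(cube, n_components):
    """PCA-style reduction of a hyperspectral cube (rows x cols x bands)
    to n_components pseudo-bands ordered by explained variance."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)  # center each band
    # eigendecomposition of the band covariance matrix
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]
    return (X @ vecs[:, order]).reshape(rows, cols, n_components)
```

Shrinking hundreds of bands to a handful of components is what makes the later endmember-extraction and unmixing stages tractable in (near) real time.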

Relevance: 60.00%

Abstract:

Hyperspectral image analysis provides information with very high spectral resolution: hundreds of bands spanning from the infrared to the ultraviolet spectrum. These images are having a strong impact in the medical field, notably in the detection of different types of cancer. One of the main open problems in this area is analyzing such images in real time: because of the large data volume they contain, the required computational power is very high. One of the main research lines for reducing the processing time is based on distributing the analysis across several cores working in parallel. Along this line, this work develops a library for the RVC-CAL language, which is specifically designed for multimedia applications and allows parallelization in an intuitive way. The library collects the functions needed to implement the classifier known as the Support Vector Machine (SVM). This work complements the research conducted in [1] and [2], where the functions needed to implement a processing chain based on the unmixing method for hyperspectral images were developed.

The document is divided into several parts. The first presents the motivation for this research project and the objectives to be achieved. This is followed by an extensive study of the current state of the art, covering hyperspectral images and their processing methods, with particular attention to the SVM classifier. After the theoretical background, the methodology followed to port a Matlab version of the SVM classifier, optimized for hyperspectral image analysis, to RVC-CAL is explained; an important point here is that the sequential version of the algorithm is developed and the groundwork is laid for a future parallelization of the classifier. The results obtained are then presented, first comparing the two versions and afterwards analyzing the RVC-CAL version stage by stage. Finally, a series of conclusions is drawn from analyzing the two versions of the SVM classifier in terms of quality of results and processing times, and several possible future lines of work related to these results are proposed.
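The classifier at the heart of this chain, in its simplest linear form, can be sketched with the Pegasos sub-gradient solver. This stands in for the ported Matlab/RVC-CAL code, which is not reproduced here; the solver choice, parameter names, and the omission of a bias term (data assumed roughly centered) are illustrative assumptions:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with the Pegasos sub-gradient method.
    X: (n, d) per-pixel feature vectors, y: labels in {-1, +1}.
    No bias term: the separating hyperplane passes through the origin."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            if y[i] * (X[i] @ w) < 1.0:
                # hinge-loss violation: step toward the example
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1.0 - eta * lam) * w  # regularization shrink only
    return w

def predict(X, w):
    return np.where(X @ w >= 0.0, 1, -1)
```

Each update touches a single sample, which is also what makes this family of solvers attractive for the kind of data-flow parallelization RVC-CAL is built around.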

Relevance: 60.00%

Abstract:

Neuronal migration is a critical phase of brain development, where defects can lead to severe ataxia, mental retardation, and seizures. In the developing cerebellum, granule neurons turn on the gene for tissue plasminogen activator (tPA) as they begin their migration into the cerebellar molecular layer. Granule neurons both secrete tPA, an extracellular serine protease that converts the proenzyme plasminogen into the active protease plasmin, and bind tPA to their cell surface. In the nervous system, tPA activity is correlated with neurite outgrowth, neuronal migration, learning, and excitotoxic death. Here we show that compared with their normal counterparts, mice lacking the tPA gene (tPA−/−) have greater than 2-fold more migrating granule neurons in the cerebellar molecular layer during the most active phase of granule cell migration. A real-time analysis of granule cell migration in cerebellar slices of tPA−/− mice shows that granule neurons are migrating 51% as fast as granule neurons in slices from wild-type mice. These findings establish a direct role for tPA in facilitating neuronal migration, and they raise the possibility that late arriving neurons may have altered synaptic interactions.