915 results for "Hilbert schemes of points Poincaré polynomial Betti numbers Goettsche formula"


Abstract:

Helium Brayton cycles have been studied as power cycles for both fission and fusion reactors, achieving high thermal efficiency. This paper studies several technological schemes of helium Brayton cycles applied to the HiPER reactor proposal. Since HiPER integrates technologies available in the short term, its working conditions result in a very low maximum temperature of the energy source, which limits the thermal performance of the cycle. The aim of this work is to analyze the potential of helium Brayton cycles as power cycles for HiPER. Several helium Brayton cycle configurations have been investigated with the purpose of raising the cycle thermal efficiency under the working conditions of HiPER. The effects of inter-cooling and reheating have specifically been studied, and sensitivity analyses of the effect of the key cycle parameters and component performances on the maximum thermal efficiency have been carried out. Adding several inter-cooling stages to a helium Brayton cycle yields a maximum thermal efficiency of over 36%, and including a reheating process adds nearly one more percentage point, reaching 37%. These results confirm that helium Brayton cycles are to be considered among the power cycle candidates for HiPER.
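The efficiency trend reported here can be reproduced with a simple first-law model. The sketch below is a minimal ideal-gas model of a recuperated helium Brayton cycle with staged inter-cooling and reheating; all numerical values (temperatures, pressure ratio, component efficiencies) are illustrative assumptions, not the HiPER operating point, and pressure losses are neglected.

```python
# Minimal first-law sketch of a recuperated helium Brayton cycle with staged
# inter-cooling and reheating. All parameter values are illustrative
# assumptions (NOT the HiPER design point); pressure losses are neglected.
GAMMA = 5.0 / 3.0                      # heat-capacity ratio of helium

def cycle_efficiency(t_max_k=723.0, t_min_k=308.0, pressure_ratio=3.0,
                     n_compressions=3, n_expansions=2,
                     eta_c=0.88, eta_t=0.92, recuperator_eff=0.95):
    """Thermal efficiency; energies are expressed per unit cp*mdot."""
    ex = (GAMMA - 1.0) / GAMMA
    r_c = pressure_ratio ** (1.0 / n_compressions)   # per-stage ratios
    r_t = pressure_ratio ** (1.0 / n_expansions)

    dt_c = t_min_k * (r_c ** ex - 1.0) / eta_c       # rise per compression
    dt_t = t_max_k * (1.0 - r_t ** -ex) * eta_t      # drop per expansion
    w_comp = n_compressions * dt_c       # inter-cooled back to t_min each stage
    w_turb = n_expansions * dt_t         # reheated back to t_max each stage

    t_comp_out = t_min_k + dt_c
    t_turb_out = t_max_k - dt_t
    # Recuperator preheats the compressor outlet with the turbine exhaust.
    t_heater_in = t_comp_out + recuperator_eff * (t_turb_out - t_comp_out)

    q_in = (t_max_k - t_heater_in) + (n_expansions - 1) * dt_t
    return (w_turb - w_comp) / q_in

if __name__ == "__main__":
    for stages in (1, 2, 3, 4):          # effect of added inter-cooling
        print(stages, f"{cycle_efficiency(n_compressions=stages):.1%}")
```

With these assumed parameters the efficiency rises monotonically as inter-cooling stages are added, which is the qualitative behavior the abstract describes.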

Abstract:

Numerous authors have proposed functions to quantify the degree of similarity between two fuzzy numbers using various descriptive parameters, such as the geometric distance, the distance between the centers of gravity or the perimeter. However, these similarity functions have drawbacks in specific situations. We propose a new similarity measure for generalized trapezoidal fuzzy numbers aimed at overcoming such drawbacks. The new measure accounts for the distance between the centers of gravity and the geometric distance, but also incorporates a new term based on the area shared by the fuzzy numbers. The proposed measure is compared against other measures in the literature.
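As an illustration of how such a measure can combine the three ingredients mentioned above, the sketch below computes an overlap term, a geometric-distance term and a centre-of-gravity term for generalized trapezoidal fuzzy numbers (a, b, c, d; w) and aggregates them by a product. The exact functional form and weighting used in the paper are not reproduced here; this aggregation is an assumption for illustration.

```python
import numpy as np

def _trap(y, x):
    """Plain trapezoidal integration (kept explicit for clarity)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def membership(x, tfn):
    """Membership of x in a generalized trapezoid (a, b, c, d) of height w."""
    a, b, c, d, w = tfn
    y = np.zeros_like(x, dtype=float)
    if b > a:
        rise = (x >= a) & (x < b)
        y[rise] = w * (x[rise] - a) / (b - a)
    y[(x >= b) & (x <= c)] = w
    if d > c:
        fall = (x > c) & (x <= d)
        y[fall] = w * (d - x[fall]) / (d - c)
    return y

def centre_of_gravity(tfn, n=2001):
    a, _, _, d, w = tfn
    x = np.linspace(a, d, n)
    mu = membership(x, tfn)
    area = _trap(mu, x)
    if area == 0.0:                          # degenerate (crisp) number
        return 0.5 * (a + d), 0.5 * w
    # Centroid of the region under mu: (int x*mu, int mu^2 / 2) / area.
    return _trap(x * mu, x) / area, _trap(mu * mu, x) / (2.0 * area)

def similarity(t1, t2, n=2001):
    lo, hi = min(t1[0], t2[0]), max(t1[3], t2[3])
    span = max(hi - lo, 1e-12)
    x = np.linspace(lo, hi, n)
    mu1, mu2 = membership(x, t1), membership(x, t2)
    union = _trap(np.maximum(mu1, mu2), x)
    overlap = _trap(np.minimum(mu1, mu2), x) / union if union else 1.0
    geo = 1.0 - sum(abs(p - q) for p, q in zip(t1[:4], t2[:4])) / (4 * span)
    (x1, y1), (x2, y2) = centre_of_gravity(t1), centre_of_gravity(t2)
    cog = 1.0 - np.hypot(x1 - x2, y1 - y2) / np.hypot(span, 1.0)
    return overlap * geo * cog               # illustrative aggregation

print(similarity((1, 2, 3, 4, 1.0), (1.5, 2.5, 3.5, 4.5, 1.0)))
```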

Abstract:

Moment invariants have been thoroughly studied and repeatedly proposed as one of the most powerful tools for 2D shape identification. In this paper a set of such descriptors is proposed whose basis functions are discontinuous at a finite number of points. The goal of using discontinuous functions is to avoid the Gibbs phenomenon and therefore to yield a better approximation capability for discontinuous signals such as images. Moreover, the proposed set of moments allows the definition of rotation invariants, which is the other main design concern. Translation and scale invariance are achieved by means of standard image normalization. Tests are conducted to evaluate the behavior of these descriptors in noisy environments, where images are corrupted with Gaussian noise at different SNR values. Results are compared to those obtained using Zernike moments, showing that the proposed descriptor matches their performance in image retrieval tasks in noisy environments while demanding much less computational power at every stage of the query chain.
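A generic way to build rotation invariants on discontinuous basis functions is to take the modulus of complex angular moments computed against radial step functions: a rotation of the image only multiplies each moment by a phase factor. The sketch below illustrates this principle; the ring basis, ring count and harmonic orders are assumptions for illustration, not the specific descriptor proposed in the paper.

```python
import numpy as np

def invariants(img, n_rings=4, max_q=4):
    """Moduli of complex moments over radial step (ring) basis functions.

    Rotating the image by alpha multiplies each moment by exp(-1j*q*alpha),
    so the returned moduli are rotation-invariant.
    """
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    r = np.hypot((x - cx) / cx, (y - cy) / cy)   # map image onto unit disc
    theta = np.arctan2(y - cy, x - cx)
    feats = []
    for k in range(n_rings):                     # discontinuous radial basis
        ring = (r >= k / n_rings) & (r < (k + 1) / n_rings)
        for q in range(max_q + 1):
            m = np.sum(img[ring] * np.exp(-1j * q * theta[ring]))
            feats.append(abs(m))
    return np.array(feats)

img = np.zeros((64, 64))
img[20:44, 28:36] = 1.0                # a simple bar-shaped test image
print(invariants(img)[:6])
```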

Abstract:

The classical theory of intermittency developed for return maps assumes a uniform density of the points reinjected from the chaotic region into the laminar one. Though it works well in some model systems, there exist a number of so-called pathological cases characterized by a significant deviation of the main characteristics from the values predicted under the assumption of a uniform distribution. Recently, we reported on how the reinjection probability density (RPD) can be generalized. Here, we extend this methodology and apply it to different dynamical systems exhibiting anomalous type-II and type-III intermittencies. Estimation of the universal RPD is based on fitting a linear function to experimental data and requires no a priori knowledge of the underlying dynamical model. We provide a special fitting procedure that enables robust estimation of the RPD from relatively short data sets (dozens of points), so the method is applicable to a wide variety of data sets, including numerical simulations and real-life experiments. The estimated RPD enables analytic evaluation of the length of the laminar phase of intermittent behaviors. We show that the method copes well with dynamical systems exhibiting significantly different statistics reported in the literature. We also derive and classify characteristic relations between the mean laminar length and the main controlling parameter, in perfect agreement with data provided by numerical simulations.
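A minimal version of the linear-fitting idea can be stated in a few lines. In the published M(x) approach, the average of the reinjected points below x is fitted by a straight line of slope m, which determines a power-law RPD phi(x) proportional to (x - x_hat)^alpha with alpha = (2m - 1)/(1 - m). The sketch below recovers alpha from a few dozen reinjection points; the synthetic sample and variable names are illustrative assumptions.

```python
import numpy as np

def estimate_rpd(reinjections):
    """Fit M(x) = m*(x - x_hat) + x_hat and return (alpha, x_hat)."""
    pts = np.sort(np.asarray(reinjections, dtype=float))
    m_of_x = np.cumsum(pts) / np.arange(1, pts.size + 1)  # running mean M(x)
    slope, intercept = np.polyfit(pts, m_of_x, 1)         # linear fit
    x_hat = intercept / (1.0 - slope)      # fixed point of the fitted line
    alpha = (2.0 * slope - 1.0) / (1.0 - slope)
    return alpha, x_hat

# Robust even for short series: 60 points drawn from phi(x) ~ x**0.5.
rng = np.random.default_rng(0)
sample = 0.1 * rng.power(1.5, 60)          # pdf ~ x**(1.5 - 1) on [0, 0.1]
print(estimate_rpd(sample))                # alpha should be close to 0.5
```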

Abstract:

In this work, an improvement of the results presented by Abellanas et al. [1] (Weak Equilibrium in a Spatial Model. International Journal of Game Theory, 40(3), 449-459) is discussed. Concretely, this paper investigates an abstract game of competition between two players that want to earn the maximum number of points from a finite set of points in the plane. The distribution of these points is assumed not to be uniform, so an appropriate weight is assigned to each position. A definition of equilibrium weaker than the classical one is introduced in order to avoid the uniqueness of the equilibrium position typical of the Nash equilibrium in these kinds of games. The existence of this approximate equilibrium in the game is analyzed by means of computational geometry techniques.
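A brute-force sketch of this kind of competition is given below: each player picks a location, every weighted point is won by the nearer player, and a pair of locations is an epsilon-equilibrium when neither player can gain more than epsilon by deviating to another candidate location. The candidate grid, weights, tie handling and simultaneous-choice rules are assumptions for illustration; the paper works with computational-geometry structures rather than brute force.

```python
import itertools
import numpy as np

def payoff(p, q, pts, w):
    """Weight captured by the player at p against the player at q."""
    dp = np.linalg.norm(pts - p, axis=1)
    dq = np.linalg.norm(pts - q, axis=1)
    return float(w[dp < dq].sum() + 0.5 * w[dp == dq].sum())  # split ties

def epsilon_equilibria(candidates, pts, w, eps=0.0):
    """All location pairs where no unilateral move gains more than eps."""
    best_vs = {}              # best achievable payoff against a fixed rival
    for q in map(tuple, candidates):
        best_vs[q] = max(payoff(np.array(c), np.array(q), pts, w)
                         for c in map(tuple, candidates))
    total = float(w.sum())
    out = []
    for p, q in itertools.product(map(tuple, candidates), repeat=2):
        u1 = payoff(np.array(p), np.array(q), pts, w)
        u2 = total - u1                       # all weight is shared out
        if best_vs[q] - u1 <= eps and best_vs[p] - u2 <= eps:
            out.append((p, q))
    return out

rng = np.random.default_rng(7)
pts = rng.uniform(0, 1, (30, 2))
w = rng.uniform(0.5, 2.0, 30)                 # non-uniform point weights
grid = [(x / 4, y / 4) for x in range(5) for y in range(5)]
print(len(epsilon_equilibria(np.array(grid), pts, w, eps=0.5)))
```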

Abstract:

In this thesis, the τ-estimation method for estimating the truncation error is extended from low-order schemes to spectral methods. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, this work uses a single grid with different polynomial orders. Furthermore, a non-time-converged solution is used, resulting in the quasi-a priori τ-estimation method, which estimates the error while the residual of the time-iterative method is still not negligible. It is shown that some of the fundamental assumptions about the error behavior, well established for low-order methods, are no longer valid in high-order schemes, making a complete revision of the error behavior necessary before redefining the algorithm. To facilitate this task, the Chebyshev Collocation Method is considered as a first step, limiting the application to simple geometries. The extension to the Discontinuous Galerkin Spectral Element Method introduces additional difficulties for the accurate definition and estimation of the error, owing to the weak formulation, the multidomain discretization and the discontinuous formulation. First, the analysis focuses on scalar conservation laws to examine the accuracy of the truncation error estimate. Then, the validity of the analysis is shown for the incompressible and compressible Euler and Navier-Stokes equations. The developed quasi-a priori τ-estimation method makes it possible to decouple the interfacial and interior contributions of the truncation error in the Discontinuous Galerkin Spectral Element Method, and provides information about the anisotropy of the solution as well as its rate of convergence in polynomial order. It is demonstrated that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
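The core of the τ-estimation idea admits a compact illustration: insert a lower-order solution into a higher-order discrete operator and read the residual as an estimate of the lower-order truncation error. The sketch below does this for Chebyshev collocation on the model problem u' + u = 0, u(-1) = 1. The model equation and polynomial orders are assumptions for illustration, not the DGSEM setting of the thesis, and the coarse solution here is fully converged (a-posteriori rather than quasi-a priori).

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes (Trefethen)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))              # rows of D sum to zero
    return D, x

def solve(n):
    """Collocation solution of u' + u = 0 with u(-1) = 1 (x[-1] is -1)."""
    D, x = cheb(n)
    A = D + np.eye(n + 1)
    b = np.zeros(n + 1)
    A[-1] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0   # boundary condition row
    return np.linalg.solve(A, b), x

n_lo, n_hi = 8, 16
u_lo, x_lo = solve(n_lo)
D_hi, x_hi = cheb(n_hi)

# Interpolate the degree-8 solution polynomial to the order-16 grid ...
coef = np.polynomial.chebyshev.chebfit(x_lo, u_lo, n_lo)
u_int = np.polynomial.chebyshev.chebval(x_hi, coef)
# ... and evaluate the order-16 operator on it: the estimated truncation error.
tau_est = D_hi @ u_int + u_int

# Reference: the same operator applied to the exact solution exp(-(x+1)).
u_ex = np.exp(-(x_hi + 1.0))
tau_ref = D_hi @ u_ex + u_ex
print(np.max(np.abs(tau_est[:-1] - tau_ref[:-1])))   # skip the BC node
```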

Abstract:

Embryogenesis is the process by which a single cell turns into a living organism. Through several stages of development, the cell population proliferates while the embryo takes shape and the organs develop, gaining their functionality. This is possible through genetic, biochemical and mechanical factors involved in a complex interaction of processes organized at different levels and at different spatio-temporal scales. Despite this complexity, embryogenesis develops in a robust and reproducible way, yet with a variability that makes possible the diversity of individuals within a species. Advances in the optical physics of microscopes and the appearance of fluorescent proteins that can be attached to expression chains, reporting on structural and functional elements of the cell, have enabled the in-vivo observation of embryogenesis. The imaging process results in sequences of high spatio-temporal resolution 3D+time data: a digital representation of the embryo that can be analyzed, provided new image processing and data analysis techniques are developed. One of the most relevant and challenging lines of research in the field is the quantification of the mechanical factors and processes involved in the shaping of the embryo, and of their interactions with other embryogenesis factors such as genetics. Owing to the complexity of these processes, studies have focused on specific problems and scales controlled in the experiments, posing and testing hypotheses to gain new biological insight; the resulting methodologies, however, are often difficult to export to other biological phenomena or specimens. This thesis is framed within this paradigm and proposes a systematic methodology to quantify the deformation patterns that emerge from the organized movement of cells during different stages of embryo development, either globally or in specific tissues, from motion estimated in in-vivo images. Such a strategy makes it possible to quantify not only local mechanisms but also to discover and characterize the scales of mechanical organization within the embryo. The framework focuses on quantifying the kinematics of the motion (deformation and strains) from images in a non-invasive way, leaving aside the causes of the motion (forces): experimental and methodological challenges and the complexity of biological systems currently prevent a complete and systematic mechanical description, including exerted forces and tissue mechanical properties. Nevertheless, deformation patterns reflect the outcome of the interacting mechanical factors and reveal a mechanical organization, necessary for development, that can be quantified with the proposed methodology. The framework relies on a Lagrangian representation of the cell dynamics, describing the system in terms of the trajectories of material points moving with the deformation rather than of fixed spatial points, with the motion estimated from the images by cell tracking or image registration. This makes it possible to reconstruct the mechanical patterning as experienced by the cells and tissues, and to build temporal profiles of deformation along the stages of development, comprising both instantaneous events and the deformation accumulated with respect to an initial state, for any domain of the embryo. The application of this framework to 3D+time data of zebrafish embryogenesis revealed mechanical structures that tend to stabilize over time, organized at a scale comparable to that of the cell differentiation (fate) map and showing signs of correlation with genetic expression patterns. The framework was also applied to the analysis of the amnioserosa tissue during the dorsal closure of Drosophila (the fruit fly), revealing that the oscillatory contraction triggered by the acto-myosin network couples different scales in a complex way: local force-generation foci, cellular morphology control mechanisms and tissue geometrical constraints. In summary, this thesis proposes a novel strategy for the analysis of multi-scale cell dynamics that quantifies mechanical patterns automatically and offers a representation that reconstructs the evolution of the processes as experienced by the cells, rather than as instantaneously captured by the microscope. The framework thus enables new strategies for the quantitative analysis and comparison of embryos and tissues during embryogenesis from in-vivo images.
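A minimal numerical sketch of the Lagrangian strain quantification is given below: the local deformation gradient F around a tracked point is obtained by least-squares fitting an affine map from the initial neighbour offsets to the current ones, and the cumulative Green-Lagrange strain follows as E = (F^T F - I)/2. The neighbourhood size and the synthetic 2D trajectories are assumptions; the thesis works on trajectories extracted from 3D+time microscopy.

```python
import numpy as np

def local_strain(X0, Xt, i, k=6):
    """Green-Lagrange strain at point i from its k nearest initial neighbours."""
    d0 = X0 - X0[i]
    idx = np.argsort(np.einsum('ij,ij->i', d0, d0))[1:k + 1]
    A = X0[idx] - X0[i]                  # initial offsets   (k x dim)
    B = Xt[idx] - Xt[i]                  # current offsets   (k x dim)
    Ft, *_ = np.linalg.lstsq(A, B, rcond=None)   # rows: b ~ a @ F^T
    F = Ft.T                             # local deformation gradient
    return 0.5 * (F.T @ F - np.eye(X0.shape[1]))

# Synthetic test: a pure 10 % uniaxial stretch should give E_xx ~ 0.105,
# since E_xx = (lambda**2 - 1) / 2 for stretch ratio lambda = 1.10.
rng = np.random.default_rng(1)
X0 = rng.uniform(0, 1, size=(200, 2))    # initial material point positions
Xt = X0 * np.array([1.10, 1.0])          # positions after the stretch
print(local_strain(X0, Xt, i=0))
```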

Abstract:

Acquired brain injury is a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence, together with the increased survival of patients once the acute phase is over, also makes it a highly prevalent problem. According to the World Health Organization (WHO), brain injury will be among the 10 most common causes of disability by 2020. Neurorehabilitation improves both cognitive and functional deficits and increases the autonomy of brain injury patients. The incorporation of new technological solutions into the neurorehabilitation process aims to reach a new paradigm in which treatments can be designed to be intensive, personalized, monitored and evidence-based, since these four characteristics are what ensure that treatments are effective. Unlike most medical disciplines, there are no associations of symptoms and signs of cognitive impairment that guide the therapy. Currently, neurorehabilitation treatments are designed on the basis of the results obtained from a neuropsychological assessment battery that evaluates the level of impairment of each cognitive function (memory, attention, executive functions, etc.). The research line in which this work is framed aims to design and develop a cognitive profile based not only on the results of that assessment battery, but also on theoretical information covering both anatomical structures and functional relationships, and on anatomical information obtained from imaging studies such as magnetic resonance. The cognitive profile used to design the treatments thus integrates personalized, evidence-based information. Neuroimaging techniques are an essential tool for identifying lesions in order to generate such cognitive profiles. The classical approach to identifying brain anatomical regions is to delineate them manually, which suffers from inconsistencies of criteria across clinicians, poor reproducibility and high time cost; automating the procedure is therefore essential to ensure an objective extraction of information. Automatic delineation of anatomical regions is performed by registering imaging studies against atlases or against studies of other subjects. However, the pathological changes associated with brain injury always involve intensity abnormalities and/or changes in the location of structures. As a consequence, traditional intensity-based registration algorithms do not work correctly and require the clinician to select certain landmarks (called singular points in this thesis). These algorithms also do not allow large, delocalized deformations, which can likewise appear in the presence of lesions caused by a stroke or a traumatic brain injury (TBI). This thesis focuses on the design, development and implementation of a methodology for the automatic detection of damaged structures, integrating algorithms whose main objective is to generate reproducible and objective results. The methodology is divided into four stages: pre-processing, singular point identification, registration and lesion detection. The work and results of this thesis are the following. Pre-processing: the aim of this first stage is to homogenize all input data so that valid conclusions can be drawn from the results; it therefore has a great impact on the final results. It consists of three operations: skull-stripping, intensity normalization and spatial normalization. Singular point identification: this stage automates the identification of anatomical landmarks (singular points), replacing their manual identification by the clinician. It allows a greater number of points to be identified, which translates into more information; it removes the factor associated with inter-subject variability, so the results are reproducible and objective; and it eliminates the time spent on manual marking. This work proposes a singular point identification algorithm (descriptor) based on a multi-detector solution that contains multi-parametric information, both spatial and intensity-based; the algorithm has been compared against similar algorithms found in the state of the art. Registration: this stage aims to bring two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this work is based on descriptors, and its main objective is to compute a vector field that can introduce delocalized deformations (in different regions of the image) as large as the associated deformation vector indicates. The proposed algorithm has been compared with other registration algorithms used in neuroimaging applications with control subjects; the results are promising and represent a new context for the automatic identification of structures. Lesion identification: this final stage identifies those structures whose characteristics associated with spatial location and area or volume have been modified with respect to a normal situation. To this end, a statistical study of the atlas to be used is carried out and the statistical parameters of normality associated with location and area are established. Depending on the structures delineated in the atlas, more or fewer anatomical structures can be identified; the methodology is independent of the selected atlas. Overall, this thesis corroborates the postulated research hypotheses regarding the automatic identification of lesions using structural medical imaging studies, specifically magnetic resonance studies. On these foundations, new research fields can be opened to contribute to improving lesion detection.
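As an illustration of the final lesion-identification stage, the sketch below flags a structure when the z-score of its centroid location or of its volume, relative to the atlas statistics of normality, exceeds a threshold. The dictionary layout, field names and threshold are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def flag_lesions(structures, atlas, z_thresh=3.0):
    """Flag structures whose location or volume deviates from atlas norms.

    structures: name -> {'centroid': (x, y, z), 'volume': v}
    atlas:      name -> {'centroid_mean', 'centroid_std',
                         'volume_mean', 'volume_std'}
    """
    flagged = []
    for name, s in structures.items():
        ref = atlas[name]
        # Norm of the per-axis z-scores of the centroid displacement.
        z_pos = np.linalg.norm((np.asarray(s['centroid'])
                                - np.asarray(ref['centroid_mean']))
                               / np.asarray(ref['centroid_std']))
        z_vol = abs(s['volume'] - ref['volume_mean']) / ref['volume_std']
        if z_pos > z_thresh or z_vol > z_thresh:
            flagged.append((name, z_pos, z_vol))
    return flagged

atlas = {'hippocampus_l': {'centroid_mean': (60.0, 55.0, 30.0),
                           'centroid_std': (2.0, 2.0, 2.0),
                           'volume_mean': 3500.0, 'volume_std': 250.0}}
subject = {'hippocampus_l': {'centroid': (61.0, 54.0, 31.0),
                             'volume': 2300.0}}
print(flag_lesions(subject, atlas))      # volume z ~ 4.8 -> flagged
```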

Abstract:

Fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which designers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for their slower clock frequencies and less efficient hardware utilization with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have yielded accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% with respect to simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources of each group separately, and finally combines the results. In this way the number of noise sources is kept under control at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also addresses the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times, presenting two new techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is then used to guide the optimization. Second, the incremental method builds on the fact that, although a given confidence interval must be guaranteed for the final results of the search, more relaxed confidence levels, and hence considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this work presents HOPLITE, an automated, flexible and modular quantization framework that includes implementations of the above techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
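The kind of search that these techniques accelerate can be summarised with a toy example: a greedy descent that repeatedly shaves one fractional bit from any signal whose reduction keeps the Monte-Carlo error estimate within specification. The datapath (a 4-tap dot product), the error metric and the sample count below are assumptions for illustration, not the HOPLITE implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def quantize(x, frac_bits):
    """Round-to-nearest fixed-point quantization with 2**-frac_bits steps."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

def mc_error(frac_bits, n_samples=2000):
    """RMS output error of a 4-tap dot product under given word-lengths."""
    w = np.array([0.31, -0.72, 0.55, 0.18])
    x = rng.uniform(-1, 1, size=(n_samples, 4))
    ref = x @ w
    xq = np.column_stack([quantize(x[:, i], frac_bits[i]) for i in range(4)])
    return np.sqrt(np.mean((xq @ w - ref) ** 2))

def greedy_search(max_rms=1e-3, start_bits=16):
    """Shave one bit wherever the error spec still holds; stop when stuck."""
    bits = [start_bits] * 4
    improved = True
    while improved:
        improved = False
        for i in range(4):
            bits[i] -= 1                 # try a cheaper word-length
            if mc_error(bits) <= max_rms:
                improved = True          # keep the cheaper assignment
            else:
                bits[i] += 1             # revert: spec violated
    return bits

print(greedy_search())
```

Each greedy step above costs one full Monte-Carlo run; the interpolative and incremental methods described in the abstract attack precisely this per-step cost.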

Abstract:

Fourier transform-infrared/statistics models demonstrate that the malignant transformation of morphologically normal human ovarian and breast tissues involves the creation of a high degree of structural modification (disorder) in DNA, before restoration of order in distant metastases. Order–disorder transitions were revealed by methods including principal components analysis of infrared spectra in which DNA samples were represented by points in two-dimensional space. Differences between the geometric sizes of clusters of points and between their locations revealed the magnitude of the order–disorder transitions. Infrared spectra provided evidence for the types of structural changes involved. Normal ovarian DNAs formed a tight cluster comparable to that of normal human blood leukocytes. The DNAs of ovarian primary carcinomas, including those that had given rise to metastases, had a high degree of disorder, whereas the DNAs of distant metastases from ovarian carcinomas were relatively ordered. However, the spectra of the metastases were more diverse than those of normal ovarian DNAs in regions assigned to base vibrations, implying increased genetic changes. DNAs of normal female breasts were substantially disordered (e.g., compared with the human blood leukocytes) as were those of the primary carcinomas, whether or not they had metastasized. The DNAs of distant breast cancer metastases were relatively ordered. These findings evoke a unified theory of carcinogenesis in which the creation of disorder in the DNA structure is an obligatory process followed by the selection of ordered, mutated DNA forms that ultimately give rise to metastases.
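The order-disorder readout described above can be mimicked with a few lines of linear algebra: project the spectra onto their first two principal components and summarise each group's disorder by the spread of its cluster of points. The synthetic spectra below stand in for real infrared data and are an assumption for illustration.

```python
import numpy as np

def pca_2d(spectra):
    """Project spectra onto their first two principal components."""
    X = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)   # rows of vt: PC axes
    return X @ vt[:2].T

def cluster_spread(points):
    """RMS distance of the points to their centroid (cluster 'size')."""
    return float(np.sqrt(np.mean(np.sum((points - points.mean(0)) ** 2, 1))))

rng = np.random.default_rng(3)
n_wavenumbers = 400
normal = 1.0 + rng.normal(0.0, 0.05, (20, n_wavenumbers))   # tight cluster
tumour = 1.0 + rng.normal(0.0, 0.25, (20, n_wavenumbers))   # disordered
scores = pca_2d(np.vstack([normal, tumour]))
print(cluster_spread(scores[:20]), cluster_spread(scores[20:]))
```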

Abstract:

Mice deficient in the Flk-1 receptor tyrosine kinase are known to die in utero because of defective vascular and hematopoietic development. Here, we show that flk-1−/− embryonic stem cells are nevertheless able to differentiate into hematopoietic and endothelial cells in vitro, although they give rise to a greatly reduced number of blast colonies, a measure of hemangioblast potential. Furthermore, normal numbers of hematopoietic progenitors are found in 7.5-day postcoitum flk-1−/− embryos, even though 8.5-day postcoitum flk-1−/− embryos are known to be deficient in such cells. Our results suggest that hematopoietic/endothelial progenitors arise independently of Flk-1, but that their subsequent migration and expansion require a Flk-1-mediated signal.

Abstract:

Mechanisms of bacterial pathogenesis have become an increasingly important subject as pathogens have become increasingly resistant to current antibiotics. The adhesion of microorganisms to the surface of host tissue is often a first step in pathogenesis and is a plausible target for new antiinfective agents. Examination of bacterial adhesion has been difficult both because it is polyvalent and because bacterial adhesins often recognize more than one type of cell-surface molecule. This paper describes an experimental procedure that measures the forces of adhesion resulting from the interaction of uropathogenic Escherichia coli to molecularly well defined models of cellular surfaces. This procedure uses self-assembled monolayers (SAMs) to model the surface of epithelial cells and optical tweezers to manipulate the bacteria. Optical tweezers orient the bacteria relative to the surface and, thus, limit the number of points of attachment (that is, the valency of attachment). Using this combination, it was possible to quantify the force required to break a single interaction between pilus and mannose groups linked to the SAM. These results demonstrate the deconvolution and characterization of complicated events in microbial adhesion in terms of specific molecular interactions. They also suggest that the combination of optical tweezers and appropriately functionalized SAMs is a uniquely synergistic system with which to study polyvalent adhesion of bacteria to biologically relevant surfaces and with which to screen for inhibitors of this adhesion.

Abstract:

The role of symmetry in the folding of proteins is discussed using energy landscape theory. An analytical argument shows it is much easier to find sequences with funneled energy landscape capable of fast folding if the structure is symmetric. The analogy with phase transitions of small clusters with magic numbers is discussed.

Abstract:

To improve the usefulness of in vivo models for the investigation of the pathophysiology of human immunodeficiency virus (HIV) infection, we modified the construction of SCID mice implanted with human fetal thymus and liver (thy/liv-SCID-hu mice) so that the peripheral blood of the mice contained significant numbers of human monocytes and T cells. After inoculation with HIV-1(59), a primary patient isolate capable of infecting monocytes and T cells, the modified thy/liv-SCID-hu mice developed disseminated HIV infection that was associated with plasma viremia. The development of plasma viremia and HIV infection in thy/liv-SCID-hu mice inoculated with HIV-1(59) was inhibited by acute treatment with human interleukin (IL) 10 but not with human IL-12. The human peripheral blood mononuclear cells in these modified thy/liv-SCID-hu mice were responsive to in vivo treatment with exogenous cytokines: human interferon gamma expression in the circulating human peripheral blood mononuclear cells was induced by treatment with IL-12 and inhibited by treatment with IL-10. Thus, these modified thy/liv-SCID-hu mice should prove to be a valuable in vivo model for examining the role of immunomodulatory therapy in modifying HIV infection. Furthermore, our demonstration of the in vivo inhibitory effect of IL-10 on acute HIV infection suggests that further studies may be warranted to evaluate whether there is a role for IL-10 therapy in preventing HIV infection in individuals soon after exposure to HIV, such as children born to HIV-infected mothers.

Abstract:

We compute the E-polynomials of the moduli spaces of representations of the fundamental group of a once-punctured surface of any genus into SL(2, C), for any possible holonomy around the puncture. We follow the geometric technique introduced in [12], based on stratifying the space of representations, and on the analysis of the behavior of the E-polynomial under fibrations.
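For reference, the standard definition behind this computation, together with the additivity property that makes the stratification technique work (stated here as general background; the notation is the usual one, not taken from the paper):

```latex
% E-polynomial (Hodge-Deligne polynomial) of a complex algebraic variety X,
% built from the mixed Hodge numbers of compactly supported cohomology:
\[
  E(X; u, v) \;=\; \sum_{p,q} \sum_{k} (-1)^{k}\, h^{k;p,q}_{c}(X)\, u^{p} v^{q}.
\]
% Additivity over a finite stratification X = \bigsqcup_i X_i is what allows
% the moduli space to be computed stratum by stratum:
\[
  E(X; u, v) \;=\; \sum_{i} E(X_i; u, v).
\]
```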