909 results for flow-based
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% with respect to the simulation-based reference values.

A known drawback of interval-extension techniques is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at any given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from two different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we must strictly guarantee a given confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn implies a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search techniques can be accelerated by factors of up to ×240 for small/medium-sized problems.

Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that implements the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
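As background for the interval-extension techniques mentioned above, the following is a minimal sketch of plain affine arithmetic in Python. It is not the thesis' statistical MAA implementation, and all names are illustrative; it only shows the property these methods rely on: noise symbols shared between signals preserve correlation, so cancellation is captured exactly, while non-linear operations add a conservative residue term.

```python
# Minimal affine-arithmetic sketch (plain AA, not the statistical MAA the
# thesis extends). A quantity is a central value plus linear noise terms,
# each noise symbol ranging over [-1, 1].
import itertools

_fresh = itertools.count()  # supplies fresh noise-symbol indices

class Affine:
    def __init__(self, center, terms=None):
        self.center = center            # nominal (central) value
        self.terms = dict(terms or {})  # noise-symbol index -> coefficient

    @classmethod
    def from_interval(cls, lo, hi):
        # [lo, hi] becomes c + r*eps with a fresh eps in [-1, 1]
        return cls((lo + hi) / 2.0, {next(_fresh): (hi - lo) / 2.0})

    def radius(self):
        return sum(abs(c) for c in self.terms.values())

    def interval(self):
        r = self.radius()
        return (self.center - r, self.center + r)

    def __add__(self, other):
        terms = dict(self.terms)
        for i, c in other.terms.items():
            terms[i] = terms.get(i, 0.0) + c
        return Affine(self.center + other.center, terms)

    def __sub__(self, other):
        terms = dict(self.terms)
        for i, c in other.terms.items():
            terms[i] = terms.get(i, 0.0) - c
        return Affine(self.center - other.center, terms)

    def __mul__(self, other):
        # Linear part is exact; the non-linear residue is bounded by a
        # fresh noise term (the standard conservative AA rule).
        terms = {i: self.center * c for i, c in other.terms.items()}
        for i, c in self.terms.items():
            terms[i] = terms.get(i, 0.0) + other.center * c
        residue = self.radius() * other.radius()
        if residue:
            terms[next(_fresh)] = residue
        return Affine(self.center * other.center, terms)

x = Affine.from_interval(1.0, 3.0)
print((x - x).interval())   # (0.0, 0.0): shared symbols cancel exactly
print((x * x).interval())   # (-1.0, 9.0): conservative bound on [1, 9]
```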
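The core idea of the incremental method, fewer Monte-Carlo samples while the required confidence is still loose, can also be sketched briefly. The confidence-interval sizing rule below is the standard normal-approximation one; the toy quantized system and all function names are assumptions for illustration, not HOPLITE's API.

```python
# Sketch of the incremental idea: early greedy iterations tolerate a loose
# confidence interval, so they need far fewer Monte-Carlo samples; the
# requirement is tightened only as the search nears a solution.
import math
import random
import statistics

def quantize(value, frac_bits):
    # Round to a fixed-point grid with `frac_bits` fractional bits.
    step = 2.0 ** -frac_bits
    return round(value / step) * step

def noise_power(frac_bits, n_samples):
    # Toy system: y = x*x with one quantized intermediate signal.
    errs = []
    for _ in range(n_samples):
        x = random.uniform(-1.0, 1.0)
        errs.append(quantize(x, frac_bits) ** 2 - x * x)
    return statistics.fmean(e * e for e in errs)

def samples_for(z_score, rel_half_width=0.1, sigma_over_mean=1.0):
    # Classic CI sizing: n >= (z * sigma / half_width)^2.
    return max(16, math.ceil((z_score * sigma_over_mean / rel_half_width) ** 2))

# Relaxed confidence early (z = 1.28, ~80%), strict at the end (z = 2.58, ~99%).
for z in (1.28, 1.96, 2.58):
    n = samples_for(z)
    print(f"z={z}: {n:4d} samples -> noise power ~ {noise_power(12, n):.3e}")
```

The point of the schedule is visible in the sample counts alone: the 80% stage needs roughly a quarter of the simulations of the 99% stage, which is where the reported speed-ups over plain greedy search come from.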
Abstract:
Sea urchin coelomocytes represent an excellent experimental model system for studying retrograde flow. Their extreme flatness allows for excellent microscopic visualization. Their discoid shape provides a radially symmetric geometry, which simplifies analysis of the flow pattern. Finally, the nonmotile nature of the cells allows for the retrograde flow to be analyzed in the absence of cell translocation. In this study we have begun an analysis of the retrograde flow mechanism by characterizing its kinetic and structural properties. The supramolecular organization of actin and myosin II was investigated using light and electron microscopic methods. Light microscopic immunolocalization was performed with anti-actin and anti-sea urchin egg myosin II antibodies, whereas transmission electron microscopy was performed on platinum replicas of critical point-dried and rotary-shadowed cytoskeletons. Coelomocytes contain a dense cortical actin network, which feeds into an extensive array of radial bundles in the interior. These actin bundles terminate in a perinuclear region, which contains a ring of myosin II bipolar minifilaments. Retrograde flow was arrested either by interfering with actin polymerization or by inhibiting myosin II function, but the pathway by which the flow was blocked was different for the two kinds of inhibitory treatments. Inhibition of actin polymerization with cytochalasin D caused the actin cytoskeleton to separate from the cell margin and undergo a finite retrograde retraction. In contrast, inhibition of myosin II function either with the wide-spectrum protein kinase inhibitor staurosporine or the myosin light chain kinase–specific inhibitor KT5926 stopped flow in the cell center, whereas normal retrograde flow continued at the cell periphery. These differential results suggest that the mechanism of retrograde flow has two, spatially segregated components. We propose a “push–pull” mechanism in which actin polymerization drives flow at the cell periphery, whereas myosin II provides the tension on the actin cytoskeleton necessary for flow in the cell interior.
Abstract:
We reexamine the Gouy phase in ballistic Airy beams (AiBs). A physical interpretation of our analysis is derived in terms of the local phase velocity and the Poynting vector streamlines. Recent experiments employing AiBs are consistent with our results. We provide an approach which potentially applies to any finite-energy paraxial wave field that lacks a beam axis.
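For context, finite-energy paraxial Airy beams are usually written in the exponentially truncated form introduced by Siviloglou and Christodoulou; the dimensionless notation below is an assumption here, not necessarily the paper's own.

```latex
% Finite-energy paraxial Airy beam envelope (standard dimensionless form):
% s = x/x_0 (transverse), \xi = z/(k x_0^2) (propagation), a > 0 truncation.
\phi(\xi,s) = \operatorname{Ai}\!\Bigl(s - \tfrac{\xi^{2}}{4} + i a \xi\Bigr)
\exp\!\Bigl(a s - \tfrac{a \xi^{2}}{2} - \tfrac{i \xi^{3}}{12}
            + \tfrac{i a^{2} \xi}{2} + \tfrac{i s \xi}{2}\Bigr),
\qquad s_{\text{peak}}(\xi) \approx \tfrac{\xi^{2}}{4}.
```

The main lobe follows the parabolic trajectory s = ξ²/4, which is why such beams are called ballistic; the Gouy phase is the anomalous phase a structured beam accumulates on propagation relative to a plane wave, and, since an AiB lacks a beam axis, it must be tracked along quantities such as the Poynting-vector streamlines rather than along a fixed axis.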
Abstract:
Mode of access: Internet.
Abstract:
"Prepared for the Illinois Dept. of Natural Resources."
Abstract:
"July 1993"--P. [2] of cover.
Abstract:
Detection of point mutations or single nucleotide polymorphisms (SNPs) is important in relation to disease susceptibility, or for detecting mutations in pathogens that determine drug resistance or host range. There is an emergent need for rapid detection methods amenable to point-of-care applications. The purpose of this study was to reduce to practice a novel method for SNP detection and to demonstrate that this technology can be used downstream of nucleic acid amplification. The authors used a model system to develop an oligonucleotide-based SNP detection system on nitrocellulose lateral flow strips. To optimize the assay they used cloned sequences of the herpes simplex virus-1 (HSV-1) DNA polymerase gene into which they introduced a point mutation. The assay system uses chimeric polymerase chain reaction (PCR) primers that incorporate hexameric repeat tags ("hexapet tags"). The chimeric sequences allow capture of amplified products at predefined positions on a lateral flow strip. These hexapet sequences have minimal cross-reactivity and allow specific hybridization-based capture of the PCR products at room temperature onto lateral flow strips that have been striped with complementary hexapet tags. Allele-specific amplification was carried out with both mutant and wild-type primer sets present in the PCR mix ("competitive" format). The resulting PCR products carried a hexapet tag corresponding to either the wild-type or the mutant sequence. The lateral flow strips are dropped into the PCR reaction tube, and the mutant and wild-type sequences diffuse along the strip and are captured at the corresponding positions. A red line indicative of a positive reaction is visible after 1 minute. Unlike other systems that require separate reactions and strips for each target sequence, this system allows multiplex PCR reactions and multiplex detection on a single strip or other suitable substrate. Unambiguous visual discrimination of a point mutation under room-temperature hybridization conditions was achieved with this model system within 10 minutes after PCR. The authors have developed a capture-based hybridization method for the detection and discrimination of HSV-1 DNA polymerase genes that contain a single nucleotide change, demonstrating that hexapet oligonucleotides can be adapted for hybridization on the lateral flow strip platform for discrimination of SNPs. This is the first step in demonstrating SNP detection on lateral flow strips using the hexapet oligonucleotide capture system, and it is anticipated that this novel system can be widely used in point-of-care settings.
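The "minimal cross-reactivity" requirement for a hexapet tag set can be approximated computationally. The sketch below is hypothetical (the paper does not describe its tag-selection criteria here); it simply rejects hexamers that are self-complementary or near-complementary to an already chosen tag, which is the kind of screen that keeps capture lines from cross-hybridizing.

```python
# Hypothetical hexamer-tag screen (illustrative only, not the paper's method):
# reject candidates that would base-pair with themselves or with chosen tags.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMP)[::-1]

def complement_matches(a, b):
    # Positions at which `a` would base-pair with `b`, i.e. positions
    # where `a` equals the reverse complement of `b`.
    return sum(x == y for x, y in zip(a, revcomp(b)))

def select_tags(candidates, max_match=3):
    chosen = []
    for tag in candidates:
        if complement_matches(tag, tag) > max_match:
            continue  # self-complementary: risks hairpins / self-capture
        if all(complement_matches(tag, c) <= max_match for c in chosen):
            chosen.append(tag)
    return chosen

# "GTGTGT" pairs perfectly with "ACACAC" and "TTCGAA" is palindromic,
# so both are rejected:
print(select_tags(["ACACAC", "AGGTCA", "GTGTGT", "TTCGAA", "CAGTCC"]))
```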
Prediction of slurry transport in SAG mills using SPH fluid flow in a dynamic DEM based porous media
Abstract:
DEM modelling of the motion of coarse fractions of the charge inside SAG mills has now been well established for more than a decade. In these models the effect of slurry has broadly been ignored due to its complexity. Smoothed particle hydrodynamics (SPH) provides a particle-based method for modelling complex free-surface fluid flows and is well suited to modelling fluid flow in mills. Previous modelling has demonstrated the powerful ability of SPH to capture dynamic fluid flow effects such as lifters crashing into slurry pools, fluid draining from lifters, flow through grates and pulp lifter discharge. However, all these examples were limited to modelling the slurry in the mill without the charge. In this paper, we represent the charge as a dynamic porous medium through which the SPH fluid is then able to flow. The porous media properties (specifically the spatial distributions of porosity and velocity) are predicted by time-averaging the mill charge predicted by a large-scale DEM model. This allows prediction of transient and steady-state slurry distributions in the mill and allows their variation with operating parameters, such as slurry viscosity and slurry volume, to be explored.
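The DEM-to-SPH coupling step described above, time-averaging the charge into a porosity and velocity field, can be sketched as follows. The uniform 2D grid, function names and clipping thresholds are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: time-average DEM particle snapshots into a cell-based
# porosity/velocity field that an SPH solver could sample for drag forces.
import numpy as np

def porous_media_field(snapshots, particle_volume, domain, shape):
    """snapshots: list of (positions[N,2], velocities[N,2]) DEM frames."""
    (xmin, xmax), (ymin, ymax) = domain
    solid = np.zeros(shape)          # accumulated solid volume per cell
    mom = np.zeros(shape + (2,))     # accumulated volume-weighted velocity
    for pos, vel in snapshots:
        ix = ((pos[:, 0] - xmin) / (xmax - xmin) * shape[0]).astype(int)
        iy = ((pos[:, 1] - ymin) / (ymax - ymin) * shape[1]).astype(int)
        ix = np.clip(ix, 0, shape[0] - 1)
        iy = np.clip(iy, 0, shape[1] - 1)
        np.add.at(solid, (ix, iy), particle_volume)
        np.add.at(mom, (ix, iy), vel * particle_volume)
    cell_vol = (xmax - xmin) * (ymax - ymin) / (shape[0] * shape[1])
    mean_solid = solid / len(snapshots)
    porosity = np.clip(1.0 - mean_solid / cell_vol, 0.05, 1.0)
    with np.errstate(invalid="ignore", divide="ignore"):
        charge_vel = np.where(solid[..., None] > 0,
                              mom / solid[..., None], 0.0)
    return porosity, charge_vel
```

An SPH fluid particle in a given cell would then feel a resistance (e.g. a Darcy- or Ergun-type drag) proportional to the local porosity and to its velocity relative to the time-averaged charge velocity, which is what lets the fluid "flow through" the moving charge.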
Abstract:
Two new types of phenolic resin-derived synthetic carbons with bi-modal and tri-modal pore-size distributions were used as supports for Pd catalysts. The catalysts were tested in chemoselective hydrogenation and hydrodehalogenation reactions in a compact multichannel flow reactor. The bi-modal and tri-modal micro-mesoporous structures of the synthetic carbons were characterised by N2 adsorption, while HR-TEM, PXRD and XPS analyses were performed to characterise the synthesised catalysts. N2 adsorption revealed that the tri-modal synthetic carbon possesses a well-developed hierarchical mesoporous structure (with 6.5 nm and 42 nm pores), contributing to a larger mesopore volume than that of the bi-modal carbon (1.57 cm3 g-1 versus 1.23 cm3 g-1). It was found that the tri-modal carbon promotes a better size distribution of Pd nanoparticles than the bi-modal carbon, owing to the presence of hierarchical mesopores limiting the growth of the Pd nanoparticles. For all the model reactions investigated, the Pd catalyst based on the tri-modal synthetic carbon (Pd/triC) shows high activity as well as high stability and reproducibility. The trend in reactivities of the different functional groups over the Pd/triC catalyst follows the general order alkyne ≫ nitro > bromo ≫ aldehyde.
Abstract:
Peer reviewed